Jan 27 09:53:50 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 27 09:53:50 crc restorecon[4693]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 
09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 09:53:50 crc 
restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 
09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:50 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 
09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc 
restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 09:53:51 crc restorecon[4693]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 09:53:51 crc restorecon[4693]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 27 09:53:51 crc kubenswrapper[4869]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 09:53:51 crc kubenswrapper[4869]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 27 09:53:51 crc kubenswrapper[4869]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 09:53:51 crc kubenswrapper[4869]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
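The flag deprecation warnings in this stretch of the log all give the same remedy: set the value in the file passed to the kubelet via --config. A minimal sketch of the config-file equivalents, assuming the standard kubelet.config.k8s.io/v1beta1 schema and placeholder values rather than this node's real settings:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock      # replaces --container-runtime-endpoint
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # replaces --volume-plugin-dir
    registerWithTaints:                                           # replaces --register-with-taints
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
    systemReserved:                                               # replaces --system-reserved
      cpu: 500m
      memory: 1Gi
    evictionHard:                     # --minimum-container-ttl-duration is superseded by eviction settings
      memory.available: 100Mi

--pod-infra-container-image has no config-file equivalent; per the warning that follows, the sandbox image is taken from the CRI runtime's own configuration instead.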
Jan 27 09:53:51 crc kubenswrapper[4869]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 27 09:53:51 crc kubenswrapper[4869]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.795585 4869 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805530 4869 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805563 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805574 4869 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805583 4869 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805592 4869 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805600 4869 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805612 4869 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805621 4869 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805643 4869 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805652 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805660 4869 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805668 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805676 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805685 4869 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805693 4869 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805701 4869 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805709 4869 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805716 4869 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805724 4869 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805732 4869 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 09:53:51 crc 
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805530 4869 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805563 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805574 4869 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805583 4869 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805592 4869 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805600 4869 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805612 4869 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805621 4869 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805643 4869 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805652 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805660 4869 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805668 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805676 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805685 4869 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805693 4869 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805701 4869 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805709 4869 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805716 4869 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805724 4869 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805732 4869 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805739 4869 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805748 4869 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805756 4869 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805763 4869 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805771 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805779 4869 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805786 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805794 4869 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805802 4869 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805809 4869 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805817 4869 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805824 4869 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805857 4869 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805865 4869 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805873 4869 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805880 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805888 4869 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805896 4869 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805903 4869 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805911 4869 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805918 4869 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805926 4869 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805934 4869 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805945 4869 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805955 4869 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805964 4869 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805973 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805982 4869 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805990 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.805998 4869 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.806007 4869 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.806017 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.806026 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.806034 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.806042 4869 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.806050 4869 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.806057 4869 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.806065 4869 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.806073 4869 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.806081 4869 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.806088 4869 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.806098 4869 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.806108 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.806118 4869 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.806126 4869 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.806134 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.806144 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.806154 4869 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.806162 4869 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.806171 4869 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.806178 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807034 4869 flags.go:64] FLAG: --address="0.0.0.0"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807054 4869 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807068 4869 flags.go:64] FLAG: --anonymous-auth="true"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807080 4869 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807091 4869 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807101 4869 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807112 4869 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807123 4869 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807135 4869 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807144 4869 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807154 4869 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807163 4869 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807172 4869 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807181 4869 flags.go:64] FLAG: --cgroup-root=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807190 4869 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807199 4869 flags.go:64] FLAG: --client-ca-file=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807208 4869 flags.go:64] FLAG: --cloud-config=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807216 4869 flags.go:64] FLAG: --cloud-provider=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807225 4869 flags.go:64] FLAG: --cluster-dns="[]"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807236 4869 flags.go:64] FLAG: --cluster-domain=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807245 4869 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807254 4869 flags.go:64] FLAG: --config-dir=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807263 4869 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807273 4869 flags.go:64] FLAG: --container-log-max-files="5"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807285 4869 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807295 4869 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807304 4869 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807313 4869 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807322 4869 flags.go:64] FLAG: --contention-profiling="false"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807331 4869 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807340 4869 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807349 4869 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807358 4869 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807369 4869 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807380 4869 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807389 4869 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807397 4869 flags.go:64] FLAG: --enable-load-reader="false"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807406 4869 flags.go:64] FLAG: --enable-server="true"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807415 4869 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807426 4869 flags.go:64] FLAG: --event-burst="100"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807435 4869 flags.go:64] FLAG: --event-qps="50"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807444 4869 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807453 4869 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807462 4869 flags.go:64] FLAG: --eviction-hard=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807485 4869 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807495 4869 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807503 4869 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807512 4869 flags.go:64] FLAG: --eviction-soft=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807522 4869 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807530 4869 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807540 4869 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807549 4869 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807558 4869 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807567 4869 flags.go:64] FLAG: --fail-swap-on="true"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807575 4869 flags.go:64] FLAG: --feature-gates=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807586 4869 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807595 4869 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807605 4869 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807614 4869 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807623 4869 flags.go:64] FLAG: --healthz-port="10248"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807632 4869 flags.go:64] FLAG: --help="false"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807641 4869 flags.go:64] FLAG: --hostname-override=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807649 4869 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807658 4869 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807667 4869 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807676 4869 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807685 4869 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807694 4869 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807703 4869 flags.go:64] FLAG: --image-service-endpoint=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807712 4869 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807721 4869 flags.go:64] FLAG: --kube-api-burst="100"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807730 4869 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807739 4869 flags.go:64] FLAG: --kube-api-qps="50"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807748 4869 flags.go:64] FLAG: --kube-reserved=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807757 4869 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807766 4869 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807775 4869 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807783 4869 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807792 4869 flags.go:64] FLAG: --lock-file=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807800 4869 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807820 4869 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807855 4869 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807868 4869 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807877 4869 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807886 4869 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807895 4869 flags.go:64] FLAG: --logging-format="text"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807904 4869 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807914 4869 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807922 4869 flags.go:64] FLAG: --manifest-url=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807932 4869 flags.go:64] FLAG: --manifest-url-header=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807943 4869 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807952 4869 flags.go:64] FLAG: --max-open-files="1000000"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807963 4869 flags.go:64] FLAG: --max-pods="110"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807972 4869 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807981 4869 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807990 4869 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.807998 4869 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808008 4869 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808017 4869 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808026 4869 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808044 4869 flags.go:64] FLAG: --node-status-max-images="50"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808053 4869 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808063 4869 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808072 4869 flags.go:64] FLAG: --pod-cidr=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808080 4869 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808092 4869 flags.go:64] FLAG: --pod-manifest-path=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808101 4869 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808110 4869 flags.go:64] FLAG: --pods-per-core="0"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808119 4869 flags.go:64] FLAG: --port="10250"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808128 4869 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808138 4869 flags.go:64] FLAG: --provider-id=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808147 4869 flags.go:64] FLAG: --qos-reserved=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808157 4869 flags.go:64] FLAG: --read-only-port="10255"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808166 4869 flags.go:64] FLAG: --register-node="true"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808175 4869 flags.go:64] FLAG: --register-schedulable="true"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808183 4869 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808209 4869 flags.go:64] FLAG: --registry-burst="10"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808219 4869 flags.go:64] FLAG: --registry-qps="5"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808228 4869 flags.go:64] FLAG: --reserved-cpus=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808237 4869 flags.go:64] FLAG: --reserved-memory=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808252 4869 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808261 4869 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808271 4869 flags.go:64] FLAG: --rotate-certificates="false"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808280 4869 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808288 4869 flags.go:64] FLAG: --runonce="false"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808297 4869 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808306 4869 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808315 4869 flags.go:64] FLAG: --seccomp-default="false"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808324 4869 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808333 4869 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808342 4869 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808352 4869 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808403 4869 flags.go:64] FLAG: --storage-driver-password="root"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808413 4869 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808422 4869 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808430 4869 flags.go:64] FLAG: --storage-driver-user="root"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808439 4869 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808448 4869 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808458 4869 flags.go:64] FLAG: --system-cgroups=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808466 4869 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808480 4869 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808489 4869 flags.go:64] FLAG: --tls-cert-file=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808498 4869 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808516 4869 flags.go:64] FLAG: --tls-min-version=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808525 4869 flags.go:64] FLAG: --tls-private-key-file=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808534 4869 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808543 4869 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808551 4869 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808561 4869 flags.go:64] FLAG: --v="2"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808586 4869 flags.go:64] FLAG: --version="false"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808598 4869 flags.go:64] FLAG: --vmodule=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808608 4869 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.808631 4869 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
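The flags.go:64 dump above is the kubelet echoing back every flag value it parsed, which makes it a convenient way to recover a node's effective configuration after the fact. A minimal sketch in Python, assuming one journal entry per line as reformatted above; parse_flags is a hypothetical helper, not part of kubelet or any Kubernetes tooling:

```python
# Sketch: recover the effective kubelet flag values from a journal dump.
import re
import sys

# Matches lines like: ... flags.go:64] FLAG: --max-pods="110"
FLAG_RE = re.compile(r'flags\.go:64\] FLAG: (--[A-Za-z0-9-]+)="(.*)"\s*$')

def parse_flags(lines):
    """Map each logged flag name to the value the kubelet parsed."""
    flags = {}
    for line in lines:
        m = FLAG_RE.search(line)
        if m:
            flags[m.group(1)] = m.group(2)
    return flags

if __name__ == "__main__":
    flags = parse_flags(sys.stdin)
    # The config file the deprecation warnings point at:
    print(flags.get("--config"))                      # /etc/kubernetes/kubelet.conf
    print(flags.get("--container-runtime-endpoint"))  # /var/run/crio/crio.sock
```

Fed from journalctl output for the kubelet unit (here wrapped as kubenswrapper), this recovers every value shown above without access to the node's flag files.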
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.811022 4869 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
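Each feature_gate.go:386 summary line records the gates the kubelet actually resolved once the unrecognized names were discarded; the long runs of "unrecognized feature gate" warnings above are OpenShift-level gate names that the kubelet's own gate registry does not know, so only the Kubernetes gates survive into this map. To diff the resolved gates between boots, the Go map literal can be parsed with a short sketch like this (parse_gate_map is a hypothetical helper; gate values in these lines are only ever true/false, so a plain split suffices):

```python
# Sketch: turn a "feature gates: {map[...]}" summary line (a Go map
# literal) into a Python dict, e.g. for diffing two boots.
import re

def parse_gate_map(line):
    m = re.search(r'feature gates: \{map\[(.*)\]\}', line)
    if not m:
        return {}
    pairs = (kv.split(":", 1) for kv in m.group(1).split())
    return {name: value == "true" for name, value in pairs}

line = 'feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}'
print(parse_gate_map(line))
# {'CloudDualStackNodeIPs': True, 'KMSv1': True, 'NodeSwap': False}
```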
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.818596 4869 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.818625 4869 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.818989 4869 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.819385 4869 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.821032 4869 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.826911 4869 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.827047 4869 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
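The kubelet has loaded its client certificate pair and, in the entries that follow, schedules rotation well ahead of the logged 2026-02-24 expiration; the CSR it posts immediately afterwards is refused only because kube-apiserver is not yet up this early in boot. To check the same certificate on the node and confirm the dates the kubelet prints, a sketch using the third-party cryptography package (path from the certificate_store line above; the certificate block is assumed to precede the key block in the file):

```python
# Sketch: inspect the kubelet client certificate referenced in the log.
from cryptography import x509

PEM = "/var/lib/kubelet/pki/kubelet-client-current.pem"

with open(PEM, "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("not valid before:", cert.not_valid_before)
print("not valid after: ", cert.not_valid_after)  # log: 2026-02-24 05:52:08 UTC
print("subject:", cert.subject.rfc4514_string())
```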
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.821032 4869 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.826911 4869 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.827047 4869 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.828817 4869 server.go:997] "Starting client certificate rotation"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.828883 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.830009 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-30 15:23:25.190909784 +0000 UTC
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.830094 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.854252 4869 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 27 09:53:51 crc kubenswrapper[4869]: E0127 09:53:51.857716 4869 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.858357 4869 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
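[editor's note] The deadline arithmetic above is worth noting: rotation is scheduled for 2025-12-30 even though the certificate is valid until 2026-02-24, because the certificate manager picks a randomized point late in the validity window so that a fleet of kubelets does not hit the signer all at once. A sketch of that computation; the 0.7 to 0.9 jitter window is an illustrative assumption, not the exact upstream constant:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // rotationDeadline picks a random point in roughly the last portion of the
    // certificate's validity (here between 70% and 90% of its lifetime).
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
    	total := notAfter.Sub(notBefore)
    	fraction := 0.7 + 0.2*rand.Float64() // assumed jitter window
    	return notBefore.Add(time.Duration(float64(total) * fraction))
    }

    func main() {
    	// Dates taken from the certificate logged above.
    	notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC)
    	notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)
    	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
    }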
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.869899 4869 log.go:25] "Validated CRI v1 runtime API"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.898345 4869 log.go:25] "Validated CRI v1 image API"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.902928 4869 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.909286 4869 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-09-49-39-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.909322 4869 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.929670 4869 manager.go:217] Machine: {Timestamp:2026-01-27 09:53:51.923649485 +0000 UTC m=+0.544073648 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a BootID:c689fb94-bab9-4f05-8ced-2230ba4f7ed7 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:d5:d1:02 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:d5:d1:02 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:16:4d:4a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:69:96:46 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:cc:4a:a3 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:97:a6:06 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ae:66:37:99:0e:39 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5e:e8:45:d3:99:6d Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.930083 4869 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.930375 4869 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.934817 4869 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.935218 4869 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.935279 4869 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.935616 4869 topology_manager.go:138] "Creating topology manager with none policy"
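[editor's note] The nodeConfig dump above includes the hard eviction thresholds this kubelet will enforce: each signal (memory.available, nodefs.available, imagefs.available, ...) is compared against either an absolute quantity (100Mi) or a percentage of capacity (10%, 15%). A small sketch of how such a threshold check works, with illustrative numbers:

    package main

    import "fmt"

    // threshold pairs an eviction signal with either an absolute quantity in
    // bytes or a fraction of capacity, matching the shape of the
    // HardEvictionThresholds entries in the nodeConfig dump above.
    type threshold struct {
    	signal     string
    	quantity   int64   // bytes; zero when percentage is used
    	percentage float64 // fraction of capacity; zero when quantity is used
    }

    // shouldEvict reports whether the observed available amount has fallen
    // below the threshold's limit.
    func shouldEvict(t threshold, available, capacity int64) bool {
    	limit := t.quantity
    	if t.percentage > 0 {
    		limit = int64(t.percentage * float64(capacity))
    	}
    	return available < limit
    }

    func main() {
    	memory := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
    	nodefs := threshold{signal: "nodefs.available", percentage: 0.10}    // 10%
    	fmt.Println(shouldEvict(memory, 50<<20, 32<<30)) // true: under 100Mi free
    	fmt.Println(shouldEvict(nodefs, 20<<30, 85<<30)) // false: well above 10%
    }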
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.935638 4869 container_manager_linux.go:303] "Creating device plugin manager"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.936458 4869 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.936510 4869 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.937954 4869 state_mem.go:36] "Initialized new in-memory state store"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.938093 4869 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.952591 4869 kubelet.go:418] "Attempting to sync node with API server"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.952637 4869 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.952684 4869 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.952703 4869 kubelet.go:324] "Adding apiserver pod source"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.952723 4869 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.963541 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.963753 4869 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 27 09:53:51 crc kubenswrapper[4869]: E0127 09:53:51.964218 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError"
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.963543 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused
Jan 27 09:53:51 crc kubenswrapper[4869]: E0127 09:53:51.964339 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.965630 4869 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
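[editor's note] The registration server created above is the kubelet's side of the device plugin protocol: a plugin dials kubelet.sock, announces its own socket and resource name over the v1beta1 gRPC API, and the kubelet then connects back to the plugin. A minimal sketch of the plugin side, assuming the standard k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1 package; the resource name and plugin socket below are hypothetical:

    package main

    import (
    	"context"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1"
    )

    func main() {
    	// Dial the registration socket logged above.
    	conn, err := grpc.Dial("unix:///var/lib/kubelet/device-plugins/kubelet.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	// Announce this plugin's own endpoint and resource name.
    	client := pluginapi.NewRegistrationClient(conn)
    	_, err = client.Register(context.Background(), &pluginapi.RegisterRequest{
    		Version:      pluginapi.Version,           // "v1beta1", matching the log
    		Endpoint:     "example-device.sock",       // hypothetical plugin socket
    		ResourceName: "example.com/exampledevice", // hypothetical resource
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    }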
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.966955 4869 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.968774 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.968798 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.968805 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.968812 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.968823 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.968844 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.968852 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.968863 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.968873 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.968880 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.968890 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.968897 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.969881 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.970293 4869 server.go:1280] "Started kubelet"
Jan 27 09:53:51 crc systemd[1]: Started Kubernetes Kubelet.
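[editor's note] Every dial tcp 38.102.83.50:6443: connect: connection refused line in this window has the same cause: on this single-node cluster the kubelet starts before the API server, so each client-go reflector and controller fails its first list or watch and retries with backoff (the lease controller below logs a starting interval of 200ms). A generic sketch of that retry shape; the plain doubling is illustrative, the real clients cap and jitter their backoff:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // dialWithBackoff keeps retrying a TCP dial, sleeping longer each time,
    // the way the kubelet's clients keep retrying until the API server is up.
    func dialWithBackoff(addr string, attempts int) (net.Conn, error) {
    	interval := 200 * time.Millisecond // starting interval, per the log
    	var err error
    	for i := 0; i < attempts; i++ {
    		var c net.Conn
    		if c, err = net.DialTimeout("tcp", addr, time.Second); err == nil {
    			return c, nil
    		}
    		fmt.Printf("dial %s failed (%v), retrying in %s\n", addr, err, interval)
    		time.Sleep(interval)
    		interval *= 2 // assumed doubling; upstream caps and jitters this
    	}
    	return nil, err
    }

    func main() {
    	if _, err := dialWithBackoff("api-int.crc.testing:6443", 3); err != nil {
    		fmt.Println("giving up:", err)
    	}
    }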
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.972183 4869 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.972208 4869 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.972732 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.972754 4869 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.972852 4869 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.972871 4869 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.973021 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.973111 4869 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 27 09:53:51 crc kubenswrapper[4869]: E0127 09:53:51.973131 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 27 09:53:51 crc kubenswrapper[4869]: W0127 09:53:51.974184 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused
Jan 27 09:53:51 crc kubenswrapper[4869]: E0127 09:53:51.974283 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.974371 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 03:05:56.861274298 +0000 UTC
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.974444 4869 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 27 09:53:51 crc kubenswrapper[4869]: E0127 09:53:51.974856 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="200ms"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.974880 4869 server.go:460] "Adding debug handlers to kubelet server"
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.978225 4869 factory.go:55] Registering systemd factory
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.978273 4869 factory.go:221] Registration of the systemd container factory successfully
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.978758 4869 factory.go:153] Registering CRI-O factory
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.978887 4869 factory.go:221] Registration of the crio container factory successfully
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.979125 4869 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.979216 4869 factory.go:103] Registering Raw factory
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.979296 4869 manager.go:1196] Started watching for new ooms in manager
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.982046 4869 manager.go:319] Starting recovery of all containers
Jan 27 09:53:51 crc kubenswrapper[4869]: E0127 09:53:51.981065 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.50:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e8dd0e6ee3402 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 09:53:51.970268162 +0000 UTC m=+0.590692245,LastTimestamp:2026-01-27 09:53:51.970268162 +0000 UTC m=+0.590692245,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
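[editor's note] The long run of reconstruct.go entries that follows is the volume manager rebuilding its actual state from disk while the API server is still unreachable: each directory found under /var/lib/kubelet/pods/<uid>/volumes/<plugin>/<name> is re-added and marked uncertain until it can be verified against the desired state. A minimal sketch of that directory scan; the path layout matches the log, everything else is simplified:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // reconstructVolumes walks the kubelet's pods directory and reports every
    // volume directory it finds, the raw material for the "uncertain" entries
    // in the log below.
    func reconstructVolumes(root string) error {
    	pods, err := os.ReadDir(root)
    	if err != nil {
    		return err
    	}
    	for _, pod := range pods {
    		volumesDir := filepath.Join(root, pod.Name(), "volumes")
    		plugins, err := os.ReadDir(volumesDir)
    		if err != nil {
    			continue // pod has no volumes directory
    		}
    		for _, plugin := range plugins {
    			names, err := os.ReadDir(filepath.Join(volumesDir, plugin.Name()))
    			if err != nil {
    				continue
    			}
    			for _, name := range names {
    				fmt.Printf("uncertain volume: pod=%s plugin=%s name=%s\n",
    					pod.Name(), plugin.Name(), name.Name())
    			}
    		}
    	}
    	return nil
    }

    func main() {
    	if err := reconstructVolumes("/var/lib/kubelet/pods"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }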
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.996186 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.996357 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.996381 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.996395 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.996430 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.996442 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.996454 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.996464 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.996561 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.996598 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.996608 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.996616 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.996625 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.996635 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.996646 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.996702 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997023 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997038 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997051 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997060 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997068 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997076 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997086 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997095 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997128 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997145 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997233 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997244 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997254 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997265 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997314 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997350 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997384 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997393 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997403 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997411 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997445 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997517 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997528 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997543 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997574 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997584 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997593 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997604 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997614 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997624 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997633 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997644 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997685 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997695 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997706 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997716 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997766 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997778 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997789 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997799 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997886 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997895 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997904 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997913 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997922 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997931 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997957 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.997967 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.998017 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.998027 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.998054 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.998062 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.998076 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.998085 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.998094 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 27 09:53:51 crc kubenswrapper[4869]: I0127 09:53:51.998103 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998136 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998146 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998156 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998167 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998176 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998190 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998200 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998207 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998221 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998229 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998237 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998247 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998257 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998266 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998276 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998288 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998298 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998308 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998318 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998326 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998340 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998350 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998361 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998371 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998381 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998391 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998400 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998409 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998419 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998428 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998437 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998445 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998464 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998474 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998483 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998493 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998503 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998513 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998524 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998535 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998545 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998554 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998564 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:51.998573 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003663 4869 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003712 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003727 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003739 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003750 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003761 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003770 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003781 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003790 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003799 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003810 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003822 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003862 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003873 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003883 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003894 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003904 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003914 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003923 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003932 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003944 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003953 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003962 4869
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003977 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003987 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.003997 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004007 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004019 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004031 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004040 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004050 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004060 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004070 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004083 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004104 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004115 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004125 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004134 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004148 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004160 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004169 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004180 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004190 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004200 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004211 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004220 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004229 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004247 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.004257 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005049 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005083 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005109 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005140 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005166 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005189 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005237 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005263 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005298 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005381 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005402 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005424 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005443 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005471 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005498 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005526 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005556 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005576 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005596 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005615 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005636 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005655 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005684 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005709 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005736 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005762 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005796 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005819 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005864 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005883 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005913 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005934 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005954 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005973 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.005992 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.006011 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.006029 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.006047 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.006073 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.006091 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.006110 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.006143 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.006164 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.006186 4869 reconstruct.go:97] "Volume reconstruction finished" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.006212 4869 reconciler.go:26] "Reconciler: start to sync state" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.009382 4869 manager.go:324] Recovery completed Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.019190 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.021208 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.021279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.021297 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.023082 4869 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.023112 4869 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.023256 4869 state_mem.go:36] "Initialized new in-memory state store" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.029491 4869 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.031604 4869 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.031664 4869 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.031895 4869 kubelet.go:2335] "Starting kubelet main sync loop" Jan 27 09:53:52 crc kubenswrapper[4869]: E0127 09:53:52.032025 4869 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 09:53:52 crc kubenswrapper[4869]: W0127 09:53:52.032978 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 27 09:53:52 crc kubenswrapper[4869]: E0127 09:53:52.033056 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.035276 4869 policy_none.go:49] "None policy: Start" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.037432 4869 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.037478 4869 state_mem.go:35] "Initializing new in-memory state store" Jan 27 09:53:52 crc kubenswrapper[4869]: E0127 09:53:52.073507 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.094443 4869 manager.go:334] "Starting Device Plugin manager" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.094570 4869 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.094597 4869 server.go:79] "Starting device plugin registration server" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.095140 4869 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.095175 4869 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.095421 4869 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.095613 4869 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.095663 4869 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 09:53:52 crc kubenswrapper[4869]: E0127 09:53:52.102547 4869 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.132844 4869 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 09:53:52 crc kubenswrapper[4869]: 
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.132995 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.133995 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.134030 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.134040 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.134145 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.134384 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.134446 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.134884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.134952 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.134969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.135153 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.135290 4869 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.135326 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.135793 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.135824 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.135883 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.136481 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.136501 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.136509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.136525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.136549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.136565 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.136616 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.136917 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.136962 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.138497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.138528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.138500 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.138556 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.138570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.138538 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.138651 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.138919 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.139000 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.139579 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.139613 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.139626 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.139775 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.139804 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.140133 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.140163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.140175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.140687 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.140742 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.140760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:52 crc kubenswrapper[4869]: E0127 09:53:52.175601 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="400ms" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.196063 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.197353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.197420 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.197440 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.197478 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 09:53:52 crc kubenswrapper[4869]: E0127 09:53:52.198217 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.209349 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.209445 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.209487 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.209523 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.209556 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.209617 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.209676 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.209733 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.209790 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.209854 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.209952 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.210027 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.210081 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.210136 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.210181 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.311547 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.311653 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.311716 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.311769 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.311811 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.311891 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.311933 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.311978 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.312056 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.312078 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.312187 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.312201 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.311972 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.312234 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.312085 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.312273 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.312311 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.312347 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.312381 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.312409 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.312452 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.312340 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.312483 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.312415 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.312596 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.312681 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.312771 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.312867 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.312949 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.313053 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.399144 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.400693 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.400750 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.400773 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.400813 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 09:53:52 crc kubenswrapper[4869]: E0127 09:53:52.401388 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.466930 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.491793 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.532421 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.532536 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.532616 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 09:53:52 crc kubenswrapper[4869]: W0127 09:53:52.548145 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-6a0e717da413af0734f3eb40879677045fd3b2164390265189d1f573bedb9868 WatchSource:0}: Error finding container 6a0e717da413af0734f3eb40879677045fd3b2164390265189d1f573bedb9868: Status 404 returned error can't find the container with id 6a0e717da413af0734f3eb40879677045fd3b2164390265189d1f573bedb9868 Jan 27 09:53:52 crc kubenswrapper[4869]: W0127 09:53:52.561553 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-233c555e7047b60b8d05d735a742081c2685753cbd39678d8cdd20328736a42d WatchSource:0}: Error finding container 233c555e7047b60b8d05d735a742081c2685753cbd39678d8cdd20328736a42d: Status 404 returned error can't find the container with id 233c555e7047b60b8d05d735a742081c2685753cbd39678d8cdd20328736a42d Jan 27 09:53:52 crc kubenswrapper[4869]: W0127 09:53:52.569534 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-3faec6ca2ad846f78a5513e7f8583beb8e21129119e4995bfe731a93ad367a8a WatchSource:0}: Error finding container 3faec6ca2ad846f78a5513e7f8583beb8e21129119e4995bfe731a93ad367a8a: Status 404 returned error can't find the container with id 3faec6ca2ad846f78a5513e7f8583beb8e21129119e4995bfe731a93ad367a8a Jan 27 09:53:52 crc kubenswrapper[4869]: W0127 09:53:52.574643 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-4010c35e7968953c39852d7a7f2d81035dfab7af355bfda3336027eb0f434a8c WatchSource:0}: Error finding container 4010c35e7968953c39852d7a7f2d81035dfab7af355bfda3336027eb0f434a8c: Status 404 returned error can't find the container with id 4010c35e7968953c39852d7a7f2d81035dfab7af355bfda3336027eb0f434a8c Jan 27 09:53:52 crc kubenswrapper[4869]: E0127 09:53:52.576942 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="800ms" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.802273 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.804446 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.804477 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.804487 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.804512 4869 kubelet_node_status.go:76] "Attempting 
to register node" node="crc" Jan 27 09:53:52 crc kubenswrapper[4869]: E0127 09:53:52.804904 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc" Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.973591 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 27 09:53:52 crc kubenswrapper[4869]: I0127 09:53:52.974627 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:52:11.397938016 +0000 UTC Jan 27 09:53:53 crc kubenswrapper[4869]: I0127 09:53:53.036080 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6a0e717da413af0734f3eb40879677045fd3b2164390265189d1f573bedb9868"} Jan 27 09:53:53 crc kubenswrapper[4869]: I0127 09:53:53.037438 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e3a6c573f1995ac66fcb98f7a3486471fe2c9094149fbe19ec7754e9640a7cdd"} Jan 27 09:53:53 crc kubenswrapper[4869]: I0127 09:53:53.038970 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4010c35e7968953c39852d7a7f2d81035dfab7af355bfda3336027eb0f434a8c"} Jan 27 09:53:53 crc kubenswrapper[4869]: I0127 09:53:53.040283 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3faec6ca2ad846f78a5513e7f8583beb8e21129119e4995bfe731a93ad367a8a"} Jan 27 09:53:53 crc kubenswrapper[4869]: I0127 09:53:53.041200 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"233c555e7047b60b8d05d735a742081c2685753cbd39678d8cdd20328736a42d"} Jan 27 09:53:53 crc kubenswrapper[4869]: W0127 09:53:53.320387 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 27 09:53:53 crc kubenswrapper[4869]: E0127 09:53:53.320513 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 27 09:53:53 crc kubenswrapper[4869]: E0127 09:53:53.378124 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="1.6s" Jan 27 09:53:53 crc kubenswrapper[4869]: W0127 
09:53:53.429488 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 27 09:53:53 crc kubenswrapper[4869]: E0127 09:53:53.429580 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 27 09:53:53 crc kubenswrapper[4869]: W0127 09:53:53.525976 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 27 09:53:53 crc kubenswrapper[4869]: E0127 09:53:53.526097 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 27 09:53:53 crc kubenswrapper[4869]: W0127 09:53:53.577073 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 27 09:53:53 crc kubenswrapper[4869]: E0127 09:53:53.577179 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 27 09:53:53 crc kubenswrapper[4869]: I0127 09:53:53.605273 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:53 crc kubenswrapper[4869]: I0127 09:53:53.606717 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:53 crc kubenswrapper[4869]: I0127 09:53:53.606764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:53 crc kubenswrapper[4869]: I0127 09:53:53.606780 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:53 crc kubenswrapper[4869]: I0127 09:53:53.606816 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 09:53:53 crc kubenswrapper[4869]: E0127 09:53:53.607337 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc" Jan 27 09:53:53 crc kubenswrapper[4869]: I0127 09:53:53.902899 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 09:53:53 crc kubenswrapper[4869]: E0127 09:53:53.904230 4869 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: 
Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 27 09:53:53 crc kubenswrapper[4869]: I0127 09:53:53.974709 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 01:19:06.92442112 +0000 UTC Jan 27 09:53:53 crc kubenswrapper[4869]: I0127 09:53:53.975026 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.046537 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4" exitCode=0 Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.046655 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.046739 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4"} Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.047572 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.047614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.047627 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.050419 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.051316 4869 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562" exitCode=0 Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.051407 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562"} Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.051550 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.053013 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.053045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.053059 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.053642 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.053679 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.053698 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.053776 4869 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3" exitCode=0 Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.053958 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.053940 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3"} Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.054584 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.054618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.054630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.056904 4869 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="772664f48020be30ae006068e7a58a03ed8945a32e95eae01dec68ca47300424" exitCode=0 Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.056952 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"772664f48020be30ae006068e7a58a03ed8945a32e95eae01dec68ca47300424"} Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.057051 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.060214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.060248 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.060261 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.065783 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796"} Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.065827 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255"} Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.065875 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb"} Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.065891 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05"} Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.065986 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.082737 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.082801 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.082823 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.444284 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.974437 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 27 09:53:54 crc kubenswrapper[4869]: I0127 09:53:54.975467 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:40:54.000053509 +0000 UTC Jan 27 09:53:54 crc kubenswrapper[4869]: E0127 09:53:54.978730 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="3.2s" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.108216 4869 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372" exitCode=0 Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.108316 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372"} Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.108363 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.109407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.109448 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.109459 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.111650 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d75205c7a7ad7b74b6a4c04b1f29c57d66e8899e41700cff45fbdcbc162a251f"} Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.111681 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a2175d560bd3b49088520c674e6668143955bdbeb0c8fc99c8186146ab4b733e"} Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.111696 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d421b97e5f8a27808a726111b6512ca6beb22600f7ce6b0d6b181c0c9a94c269"} Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.112656 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.113488 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.113522 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.113535 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.113861 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3da1c777979a54adf96b111ac134e777821f76fb11b8b9367e390b8c3ed1bac5"} Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.113891 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.114712 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.114752 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.114768 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.117264 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058"} Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.117297 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458"} Jan 
27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.117314 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3"} Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.117326 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8"} Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.117303 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.118209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.118249 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.118263 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.207785 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.210278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.210314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.210327 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.210358 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 09:53:55 crc kubenswrapper[4869]: E0127 09:53:55.210923 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc" Jan 27 09:53:55 crc kubenswrapper[4869]: W0127 09:53:55.437619 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 27 09:53:55 crc kubenswrapper[4869]: E0127 09:53:55.437687 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.813787 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 09:53:55 crc kubenswrapper[4869]: I0127 09:53:55.976273 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:00:55.352822231 +0000 UTC 
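Throughout the window above, the kubelet cannot reach the API server at https://api-int.crc.testing:6443 (connection refused while the static kube-apiserver pod is still starting), so node registration, the informer list/watch calls, and the node-lease creation all fail and are retried. The lease controller's reported retry interval doubles on each failure: interval="800ms", then "1.6s", then "3.2s", and it reaches "6.4s" further down at 09:54:11. Below is a minimal, self-contained Go sketch of that capped-doubling backoff pattern; tryEnsureLease, the four-attempt loop, and the 7s cap are illustrative stand-ins chosen for this sketch, not kubelet's actual node-lease controller code.

// Minimal sketch of the capped exponential backoff visible in the
// "Failed to ensure lease exists, will retry" entries above.
// tryEnsureLease and maxInterval are hypothetical stand-ins; the real
// retry logic lives in kubelet's node-lease controller.
package main

import (
	"errors"
	"fmt"
	"time"
)

// tryEnsureLease stands in for the real lease get/create against the
// API server; here it always fails, like the dials in the log.
func tryEnsureLease() error {
	return errors.New("dial tcp 38.102.83.50:6443: connect: connection refused")
}

func main() {
	interval := 800 * time.Millisecond  // first retry interval seen in the log
	const maxInterval = 7 * time.Second // cap assumed for this sketch

	for attempt := 1; attempt <= 4; attempt++ {
		if err := tryEnsureLease(); err != nil {
			fmt.Printf("attempt %d failed: %v; retrying in %v\n", attempt, err, interval)
			time.Sleep(interval)
			interval *= 2 // 800ms -> 1.6s -> 3.2s -> 6.4s, matching the log
			if interval > maxInterval {
				interval = maxInterval
			}
		}
	}
}

Run as-is, this prints one line per failed attempt with the doubling delay. In the log the same pattern simply continues while the API server is unreachable; the entries that follow show it still retrying (interval="6.4s" at 09:54:11) until the kube-apiserver's probes start passing.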
Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.122674 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4bba355d486861fa2bff3493e6b38e2d27853a8a618085db7307fd4baa23f3aa"} Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.122733 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.125923 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.125960 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.125972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.128334 4869 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6" exitCode=0 Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.128484 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.129482 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.129872 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6"} Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.129959 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.130386 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.131128 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.131160 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.131171 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.131816 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.131876 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.131890 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.132424 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.132453 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.132466 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.132910 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.132934 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.132947 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.518105 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.525861 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 09:53:56 crc kubenswrapper[4869]: I0127 09:53:56.976549 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 01:00:46.01605463 +0000 UTC Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.136636 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7e5a8f86688a1597cda5315b78dd43e1338cfc2459726b2250c4cb475d82b7de"} Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.136689 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.136734 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.136773 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.136788 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.136691 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d"} Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.137507 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1"} Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.137531 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7"} Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.137544 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.137549 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432"} Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.137901 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.137933 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.137944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.137995 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.138014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.138024 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.138282 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.138302 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.138313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.138956 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.138983 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.138992 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.433133 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.977499 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:18:23.092722448 +0000 UTC Jan 27 09:53:57 crc kubenswrapper[4869]: I0127 09:53:57.986641 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 09:53:58 crc kubenswrapper[4869]: I0127 09:53:58.139590 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:58 crc kubenswrapper[4869]: I0127 09:53:58.139654 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:58 crc kubenswrapper[4869]: I0127 09:53:58.140875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:58 crc kubenswrapper[4869]: I0127 09:53:58.140933 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:58 crc kubenswrapper[4869]: I0127 
09:53:58.140942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:58 crc kubenswrapper[4869]: I0127 09:53:58.141068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:58 crc kubenswrapper[4869]: I0127 09:53:58.141126 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:58 crc kubenswrapper[4869]: I0127 09:53:58.141150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:58 crc kubenswrapper[4869]: I0127 09:53:58.227154 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:53:58 crc kubenswrapper[4869]: I0127 09:53:58.227310 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:58 crc kubenswrapper[4869]: I0127 09:53:58.228424 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:58 crc kubenswrapper[4869]: I0127 09:53:58.228493 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:58 crc kubenswrapper[4869]: I0127 09:53:58.228513 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:58 crc kubenswrapper[4869]: I0127 09:53:58.411343 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:58 crc kubenswrapper[4869]: I0127 09:53:58.412993 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:58 crc kubenswrapper[4869]: I0127 09:53:58.413049 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:58 crc kubenswrapper[4869]: I0127 09:53:58.413068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:58 crc kubenswrapper[4869]: I0127 09:53:58.413103 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 09:53:58 crc kubenswrapper[4869]: I0127 09:53:58.978468 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 12:04:00.501502441 +0000 UTC Jan 27 09:53:59 crc kubenswrapper[4869]: I0127 09:53:59.144434 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:53:59 crc kubenswrapper[4869]: I0127 09:53:59.145960 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:53:59 crc kubenswrapper[4869]: I0127 09:53:59.145999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:53:59 crc kubenswrapper[4869]: I0127 09:53:59.146009 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:53:59 crc kubenswrapper[4869]: I0127 09:53:59.761966 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 09:53:59 crc kubenswrapper[4869]: I0127 09:53:59.979448 4869 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 07:18:27.201362305 +0000 UTC Jan 27 09:54:00 crc kubenswrapper[4869]: I0127 09:54:00.147070 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:54:00 crc kubenswrapper[4869]: I0127 09:54:00.148148 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:00 crc kubenswrapper[4869]: I0127 09:54:00.148193 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:00 crc kubenswrapper[4869]: I0127 09:54:00.148209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:00 crc kubenswrapper[4869]: I0127 09:54:00.365065 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:54:00 crc kubenswrapper[4869]: I0127 09:54:00.365327 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:54:00 crc kubenswrapper[4869]: I0127 09:54:00.367031 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:00 crc kubenswrapper[4869]: I0127 09:54:00.367061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:00 crc kubenswrapper[4869]: I0127 09:54:00.367079 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:00 crc kubenswrapper[4869]: I0127 09:54:00.466609 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 27 09:54:00 crc kubenswrapper[4869]: I0127 09:54:00.466792 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:54:00 crc kubenswrapper[4869]: I0127 09:54:00.467738 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:00 crc kubenswrapper[4869]: I0127 09:54:00.467773 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:00 crc kubenswrapper[4869]: I0127 09:54:00.467783 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:00 crc kubenswrapper[4869]: I0127 09:54:00.980391 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:58:43.513439483 +0000 UTC Jan 27 09:54:01 crc kubenswrapper[4869]: I0127 09:54:01.050912 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:54:01 crc kubenswrapper[4869]: I0127 09:54:01.149504 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:54:01 crc kubenswrapper[4869]: I0127 09:54:01.150292 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:01 crc kubenswrapper[4869]: I0127 09:54:01.150360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:01 crc kubenswrapper[4869]: I0127 09:54:01.150383 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:01 crc kubenswrapper[4869]: I0127 09:54:01.981186 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 10:39:58.20543228 +0000 UTC Jan 27 09:54:02 crc kubenswrapper[4869]: E0127 09:54:02.102634 4869 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 09:54:02 crc kubenswrapper[4869]: I0127 09:54:02.422767 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 27 09:54:02 crc kubenswrapper[4869]: I0127 09:54:02.423060 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:54:02 crc kubenswrapper[4869]: I0127 09:54:02.424500 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:02 crc kubenswrapper[4869]: I0127 09:54:02.424560 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:02 crc kubenswrapper[4869]: I0127 09:54:02.424583 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:02 crc kubenswrapper[4869]: I0127 09:54:02.762815 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 09:54:02 crc kubenswrapper[4869]: I0127 09:54:02.762945 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 09:54:02 crc kubenswrapper[4869]: I0127 09:54:02.981574 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 15:05:54.822575391 +0000 UTC Jan 27 09:54:03 crc kubenswrapper[4869]: I0127 09:54:03.982110 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 17:47:48.276236235 +0000 UTC Jan 27 09:54:04 crc kubenswrapper[4869]: I0127 09:54:04.448692 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 09:54:04 crc kubenswrapper[4869]: I0127 09:54:04.448854 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:54:04 crc kubenswrapper[4869]: I0127 09:54:04.449820 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:04 crc kubenswrapper[4869]: I0127 09:54:04.449868 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:04 crc kubenswrapper[4869]: I0127 09:54:04.449878 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 09:54:04 crc kubenswrapper[4869]: I0127 09:54:04.982667 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 08:57:46.631231473 +0000 UTC Jan 27 09:54:05 crc kubenswrapper[4869]: W0127 09:54:05.966700 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 27 09:54:05 crc kubenswrapper[4869]: I0127 09:54:05.966788 4869 trace.go:236] Trace[1400481655]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 09:53:55.966) (total time: 10000ms): Jan 27 09:54:05 crc kubenswrapper[4869]: Trace[1400481655]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (09:54:05.966) Jan 27 09:54:05 crc kubenswrapper[4869]: Trace[1400481655]: [10.000689442s] [10.000689442s] END Jan 27 09:54:05 crc kubenswrapper[4869]: E0127 09:54:05.966811 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 27 09:54:05 crc kubenswrapper[4869]: I0127 09:54:05.974171 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 27 09:54:05 crc kubenswrapper[4869]: I0127 09:54:05.983484 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 15:39:05.722474078 +0000 UTC Jan 27 09:54:06 crc kubenswrapper[4869]: W0127 09:54:06.115508 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 27 09:54:06 crc kubenswrapper[4869]: I0127 09:54:06.115658 4869 trace.go:236] Trace[1297923803]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 09:53:56.114) (total time: 10001ms): Jan 27 09:54:06 crc kubenswrapper[4869]: Trace[1297923803]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:54:06.115) Jan 27 09:54:06 crc kubenswrapper[4869]: Trace[1297923803]: [10.001195654s] [10.001195654s] END Jan 27 09:54:06 crc kubenswrapper[4869]: E0127 09:54:06.115687 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 27 09:54:06 crc kubenswrapper[4869]: W0127 09:54:06.289651 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": 
net/http: TLS handshake timeout Jan 27 09:54:06 crc kubenswrapper[4869]: I0127 09:54:06.289899 4869 trace.go:236] Trace[2070917299]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 09:53:56.288) (total time: 10001ms): Jan 27 09:54:06 crc kubenswrapper[4869]: Trace[2070917299]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:54:06.289) Jan 27 09:54:06 crc kubenswrapper[4869]: Trace[2070917299]: [10.00119177s] [10.00119177s] END Jan 27 09:54:06 crc kubenswrapper[4869]: E0127 09:54:06.289921 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 27 09:54:06 crc kubenswrapper[4869]: I0127 09:54:06.490943 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42220->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 27 09:54:06 crc kubenswrapper[4869]: I0127 09:54:06.491004 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42220->192.168.126.11:17697: read: connection reset by peer" Jan 27 09:54:06 crc kubenswrapper[4869]: I0127 09:54:06.629570 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 09:54:06 crc kubenswrapper[4869]: I0127 09:54:06.629630 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 09:54:06 crc kubenswrapper[4869]: I0127 09:54:06.634196 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 09:54:06 crc kubenswrapper[4869]: I0127 09:54:06.634277 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 09:54:06 crc kubenswrapper[4869]: I0127 09:54:06.984164 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
Jan 27 09:54:06 crc kubenswrapper[4869]: I0127 09:54:06.984164 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 01:39:54.470165807 +0000 UTC
Jan 27 09:54:07 crc kubenswrapper[4869]: I0127 09:54:07.166071 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 27 09:54:07 crc kubenswrapper[4869]: I0127 09:54:07.167767 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4bba355d486861fa2bff3493e6b38e2d27853a8a618085db7307fd4baa23f3aa" exitCode=255
Jan 27 09:54:07 crc kubenswrapper[4869]: I0127 09:54:07.167815 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4bba355d486861fa2bff3493e6b38e2d27853a8a618085db7307fd4baa23f3aa"}
Jan 27 09:54:07 crc kubenswrapper[4869]: I0127 09:54:07.168051 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 09:54:07 crc kubenswrapper[4869]: I0127 09:54:07.169255 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:54:07 crc kubenswrapper[4869]: I0127 09:54:07.169296 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:54:07 crc kubenswrapper[4869]: I0127 09:54:07.169313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:54:07 crc kubenswrapper[4869]: I0127 09:54:07.169974 4869 scope.go:117] "RemoveContainer" containerID="4bba355d486861fa2bff3493e6b38e2d27853a8a618085db7307fd4baa23f3aa"
Jan 27 09:54:07 crc kubenswrapper[4869]: I0127 09:54:07.984530 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 21:42:34.35466802 +0000 UTC
Jan 27 09:54:08 crc kubenswrapper[4869]: I0127 09:54:08.173516 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 27 09:54:08 crc kubenswrapper[4869]: I0127 09:54:08.176518 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef"}
Jan 27 09:54:08 crc kubenswrapper[4869]: I0127 09:54:08.176711 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 09:54:08 crc kubenswrapper[4869]: I0127 09:54:08.178025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:54:08 crc kubenswrapper[4869]: I0127 09:54:08.178110 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:54:08 crc kubenswrapper[4869]: I0127 09:54:08.178129 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:54:08 crc kubenswrapper[4869]: I0127 09:54:08.228315 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 09:54:08 crc kubenswrapper[4869]: I0127 09:54:08.985054 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:24:40.132212256 +0000 UTC
Jan 27 09:54:09 crc kubenswrapper[4869]: I0127 09:54:09.179151 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 09:54:09 crc kubenswrapper[4869]: I0127 09:54:09.180296 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:54:09 crc kubenswrapper[4869]: I0127 09:54:09.180367 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:54:09 crc kubenswrapper[4869]: I0127 09:54:09.180387 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:54:09 crc kubenswrapper[4869]: I0127 09:54:09.816103 4869 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 27 09:54:09 crc kubenswrapper[4869]: I0127 09:54:09.985726 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 09:08:58.503426598 +0000 UTC
Jan 27 09:54:10 crc kubenswrapper[4869]: I0127 09:54:10.986562 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 07:19:16.08508678 +0000 UTC
Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.054449 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.054761 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.055709 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.055740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.055749 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.059230 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.186878 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.187768 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.187794 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.187804 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:54:11 crc kubenswrapper[4869]: E0127 09:54:11.631958 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
"Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.640404 4869 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.640746 4869 trace.go:236] Trace[1261768538]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 09:53:59.138) (total time: 12502ms): Jan 27 09:54:11 crc kubenswrapper[4869]: Trace[1261768538]: ---"Objects listed" error: 12502ms (09:54:11.640) Jan 27 09:54:11 crc kubenswrapper[4869]: Trace[1261768538]: [12.502632742s] [12.502632742s] END Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.640916 4869 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.643923 4869 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.674343 4869 csr.go:261] certificate signing request csr-wqmsb is approved, waiting to be issued Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.689210 4869 csr.go:257] certificate signing request csr-wqmsb is issued Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.828338 4869 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 09:54:11 crc kubenswrapper[4869]: W0127 09:54:11.828469 4869 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.923319 4869 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.967070 4869 apiserver.go:52] "Watching apiserver" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.971869 4869 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.972156 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-bgt4x","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.972805 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.972812 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.972861 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:11 crc kubenswrapper[4869]: E0127 09:54:11.973464 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.972899 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.972953 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.972987 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bgt4x" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.972861 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 09:54:11 crc kubenswrapper[4869]: E0127 09:54:11.973638 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:11 crc kubenswrapper[4869]: E0127 09:54:11.973972 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.975953 4869 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.979473 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.979473 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.979666 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.979783 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.979994 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.980032 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.980044 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.980210 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.980333 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.980432 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.981152 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.981900 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.987209 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:47:19.781777997 +0000 UTC Jan 27 09:54:11 crc kubenswrapper[4869]: I0127 09:54:11.994420 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.004012 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.012579 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.021924 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.031909 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.041718 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.042796 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.042822 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.042849 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.042872 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.042892 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.042913 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.042933 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 09:54:12 crc 
kubenswrapper[4869]: I0127 09:54:12.042954 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.042999 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043025 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043047 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043068 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043088 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043106 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043123 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043141 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043126 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: 
"a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043159 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043197 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043216 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043232 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043246 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043236 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043260 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043274 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043290 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043306 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043320 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043336 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043350 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043364 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043378 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043379 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod 
"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043392 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043406 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043421 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043423 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043441 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043466 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043487 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043543 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043566 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 09:54:12 crc 
kubenswrapper[4869]: I0127 09:54:12.043588 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043591 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043637 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043657 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043664 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043674 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043696 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043713 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043734 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043752 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043769 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043787 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043807 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043824 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043860 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043866 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043816 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043887 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043911 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043929 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.043990 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044012 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044034 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044053 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044050 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044075 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044092 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044107 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044124 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044140 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044155 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044161 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044171 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044262 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044287 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044311 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044333 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044346 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044354 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044415 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044504 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044604 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044537 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044639 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044798 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044824 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044895 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044747 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.045065 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.045978 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.046035 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.046086 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.046245 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.046290 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.046312 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.046300 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.046380 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.046392 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.046476 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.046503 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.046525 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.046681 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.046821 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.046864 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.046897 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.046951 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047024 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047103 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047153 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047200 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047280 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047378 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047431 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047447 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047474 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047634 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047664 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047679 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047654 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.044354 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047746 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047760 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047771 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047786 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047818 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047891 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047939 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047962 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047984 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048019 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048040 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048058 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048091 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048113 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048134 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048173 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048195 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048217 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048262 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048279 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048300 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048338 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048356 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048378 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048480 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048504 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048525 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048565 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048585 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048604 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048642 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048663 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048682 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048729 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048751 4869 
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047886 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.047893 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048095 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048130 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048234 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048264 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048887 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048303 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048326 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048475 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048505 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048916 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048549 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048587 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048645 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048696 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048752 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.048776 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:54:12.548759523 +0000 UTC m=+21.169183606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.048991 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049019 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049022 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049066 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049019 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049223 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049260 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049209 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049285 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049344 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049372 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049382 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049397 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049433 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049460 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049480 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049507 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049537 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049562 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049582 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049648 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049673 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" 
(UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049696 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049724 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049747 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049766 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.050445 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.050850 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.050886 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.050926 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.050949 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.050990 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051037 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051065 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051094 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051123 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051150 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051183 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051215 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051256 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051282 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051399 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051455 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051490 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051552 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051595 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051621 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051669 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051699 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051725 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051766 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051804 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051843 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049397 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049573 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049855 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049880 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.049964 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.050199 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.050205 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051940 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.050342 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.050349 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.050349 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.050429 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.050473 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.050488 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.050626 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.050638 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.050773 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.050950 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051060 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.052041 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051142 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051187 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051407 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051883 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.052353 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.052391 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.052423 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.052458 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.052485 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.052515 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.052544 4869 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.052567 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.052592 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.052616 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.052640 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.052669 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.052695 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.052727 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051652 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051642 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051709 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051778 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.051971 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.052621 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.052717 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.052729 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.052729 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.053250 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.052749 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.053556 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.053998 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.054027 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.054775 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.054965 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.055010 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.055032 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.055084 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.055096 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.055110 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.055146 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.055186 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.055215 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.055340 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.055366 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.055409 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.055541 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.055664 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056057 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056335 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056358 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056392 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056411 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056427 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056489 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056527 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056555 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056584 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056667 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056701 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056729 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056751 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056779 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" 
(UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056806 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056811 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056812 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056883 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056915 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056941 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056965 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.056987 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057015 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057041 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057053 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057065 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057078 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057094 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057127 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057151 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057154 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057177 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057203 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057274 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057307 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057316 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057354 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057384 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057408 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057437 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057465 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057493 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057518 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057513 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057543 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057575 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057612 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/79b770b3-2bdc-4098-97a1-10a2dd539d16-hosts-file\") pod \"node-resolver-bgt4x\" (UID: \"79b770b3-2bdc-4098-97a1-10a2dd539d16\") " pod="openshift-dns/node-resolver-bgt4x" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057641 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057668 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057697 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057722 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2jbv\" (UniqueName: \"kubernetes.io/projected/79b770b3-2bdc-4098-97a1-10a2dd539d16-kube-api-access-l2jbv\") pod \"node-resolver-bgt4x\" (UID: \"79b770b3-2bdc-4098-97a1-10a2dd539d16\") " pod="openshift-dns/node-resolver-bgt4x" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057852 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057870 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc 
kubenswrapper[4869]: I0127 09:54:12.057888 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057901 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057913 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057927 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057943 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057957 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057970 4869 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057982 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057999 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058012 4869 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058024 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058039 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058054 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc 
kubenswrapper[4869]: I0127 09:54:12.058067 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058079 4869 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058095 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058107 4869 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058118 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058133 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058149 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058161 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058173 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058187 4869 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058199 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058212 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058224 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" 
DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058240 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058252 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058264 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058275 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058290 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058301 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059241 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059275 4869 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059290 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059316 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059331 4869 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059345 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059357 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 
09:54:12.059373 4869 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059386 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059397 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059410 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059425 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059438 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059451 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059464 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059541 4869 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059558 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059573 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059590 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059602 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node 
\"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059633 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059649 4869 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059666 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059678 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059690 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059702 4869 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059718 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059730 4869 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059741 4869 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059758 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059771 4869 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059783 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059796 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") 
on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059811 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059823 4869 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059851 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059862 4869 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059877 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059888 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059899 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059911 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059926 4869 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059939 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059956 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059971 4869 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059983 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath 
\"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059994 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060006 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060021 4869 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060034 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060046 4869 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060056 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060060 4869 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060123 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060134 4869 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060147 4869 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060161 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060170 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060180 4869 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060190 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060203 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060211 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060221 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060229 4869 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060244 4869 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060256 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060269 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060283 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060292 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060302 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060313 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060327 
4869 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060483 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059819 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057566 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057786 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057890 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057945 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058030 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.057954 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058198 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058237 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058251 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058591 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058625 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058890 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058907 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058910 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058929 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059096 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059021 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.058919 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059293 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059561 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.059665 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060131 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060146 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060294 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.060648 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.061048 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.061075 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.061140 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.061196 4869 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.061231 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:12.561215607 +0000 UTC m=+21.181639690 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.075559 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.075761 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.075934 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.076089 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.076318 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.076430 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.075799 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.076711 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.076744 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.076867 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.076978 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.077011 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.077759 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.077815 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.077908 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:12.577892764 +0000 UTC m=+21.198316847 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078048 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078063 4869 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078072 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078083 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078101 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078110 4869 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078119 4869 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078128 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078137 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078145 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078153 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078162 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078172 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078181 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078190 4869 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078198 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078208 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078216 4869 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078224 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078233 4869 reconciler_common.go:293] "Volume 
detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078243 4869 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078251 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078259 4869 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078268 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078276 4869 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078285 4869 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078293 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078302 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078310 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078585 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078614 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078908 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.078942 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.079530 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.079631 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.079641 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.079664 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.079681 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.079751 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:12.579731257 +0000 UTC m=+21.200155440 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.079952 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.082051 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.087154 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.088159 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.088279 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.091302 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.092246 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.092275 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.092289 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.092367 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:12.592351456 +0000 UTC m=+21.212775539 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.094617 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.096318 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.099909 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.104660 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.107385 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.107696 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.107764 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.107912 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.109118 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.109153 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.109505 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.110114 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.112665 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.113017 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.113664 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.120121 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.122944 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.128990 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.132294 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.137163 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.147527 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.148222 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-k2qh9"] Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.148574 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.149660 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-9pfwk"] Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.150408 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.150912 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.151737 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.152544 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.152744 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.153000 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-xj5gd"] Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.153309 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.153378 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.157392 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.157411 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.157550 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.157890 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.158561 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.159600 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.159214 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.161396 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.173035 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 
27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.178729 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/79b770b3-2bdc-4098-97a1-10a2dd539d16-hosts-file\") pod \"node-resolver-bgt4x\" (UID: \"79b770b3-2bdc-4098-97a1-10a2dd539d16\") " pod="openshift-dns/node-resolver-bgt4x" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.178814 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/79b770b3-2bdc-4098-97a1-10a2dd539d16-hosts-file\") pod \"node-resolver-bgt4x\" (UID: \"79b770b3-2bdc-4098-97a1-10a2dd539d16\") " pod="openshift-dns/node-resolver-bgt4x" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.178882 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2jbv\" (UniqueName: \"kubernetes.io/projected/79b770b3-2bdc-4098-97a1-10a2dd539d16-kube-api-access-l2jbv\") pod \"node-resolver-bgt4x\" (UID: \"79b770b3-2bdc-4098-97a1-10a2dd539d16\") " pod="openshift-dns/node-resolver-bgt4x" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.178931 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179007 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179324 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179399 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179425 4869 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179463 4869 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179489 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179497 4869 
reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179506 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179517 4869 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179527 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179536 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179544 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179551 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179560 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179568 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179576 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179586 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179595 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179604 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179614 4869 reconciler_common.go:293] "Volume detached for 
volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179622 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179631 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179639 4869 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179647 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179656 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179664 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179672 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179680 4869 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179693 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179701 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179710 4869 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179718 4869 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179727 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179735 4869 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179743 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179751 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179760 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179768 4869 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179776 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179784 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179792 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179801 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179809 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179817 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179826 4869 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179851 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179861 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179869 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179878 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179886 4869 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179894 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179902 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179910 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179918 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179926 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179934 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179943 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179951 4869 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179958 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179966 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.179974 4869 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.182522 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.190467 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.194960 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2jbv\" (UniqueName: \"kubernetes.io/projected/79b770b3-2bdc-4098-97a1-10a2dd539d16-kube-api-access-l2jbv\") pod \"node-resolver-bgt4x\" (UID: \"79b770b3-2bdc-4098-97a1-10a2dd539d16\") " pod="openshift-dns/node-resolver-bgt4x" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.198037 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.208443 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.219286 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.227648 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.234976 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.241705 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.249099 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.256704 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281151 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-host-run-netns\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281203 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/610cadf1-85e4-40f1-a551-998262507ca2-os-release\") pod \"multus-additional-cni-plugins-9pfwk\" (UID: \"610cadf1-85e4-40f1-a551-998262507ca2\") " pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281230 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-system-cni-dir\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281265 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/12a3e458-3f5f-46cf-b242-9a3986250bcf-mcd-auth-proxy-config\") pod \"machine-config-daemon-k2qh9\" (UID: \"12a3e458-3f5f-46cf-b242-9a3986250bcf\") " pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281288 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-os-release\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281313 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-host-var-lib-cni-bin\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281334 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-host-run-multus-certs\") pod \"multus-xj5gd\" 
(UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281360 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/610cadf1-85e4-40f1-a551-998262507ca2-system-cni-dir\") pod \"multus-additional-cni-plugins-9pfwk\" (UID: \"610cadf1-85e4-40f1-a551-998262507ca2\") " pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281387 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsxp7\" (UniqueName: \"kubernetes.io/projected/c4e8dfa0-1849-457a-b564-4f77e534a7e0-kube-api-access-vsxp7\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281429 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-multus-socket-dir-parent\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281451 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-host-run-k8s-cni-cncf-io\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281471 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k25vw\" (UniqueName: \"kubernetes.io/projected/12a3e458-3f5f-46cf-b242-9a3986250bcf-kube-api-access-k25vw\") pod \"machine-config-daemon-k2qh9\" (UID: \"12a3e458-3f5f-46cf-b242-9a3986250bcf\") " pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281492 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-cnibin\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281514 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b626l\" (UniqueName: \"kubernetes.io/projected/610cadf1-85e4-40f1-a551-998262507ca2-kube-api-access-b626l\") pod \"multus-additional-cni-plugins-9pfwk\" (UID: \"610cadf1-85e4-40f1-a551-998262507ca2\") " pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281538 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/12a3e458-3f5f-46cf-b242-9a3986250bcf-proxy-tls\") pod \"machine-config-daemon-k2qh9\" (UID: \"12a3e458-3f5f-46cf-b242-9a3986250bcf\") " pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281565 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/610cadf1-85e4-40f1-a551-998262507ca2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9pfwk\" (UID: \"610cadf1-85e4-40f1-a551-998262507ca2\") " pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281584 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/610cadf1-85e4-40f1-a551-998262507ca2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9pfwk\" (UID: \"610cadf1-85e4-40f1-a551-998262507ca2\") " pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281603 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-multus-conf-dir\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281621 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-etc-kubernetes\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281644 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/610cadf1-85e4-40f1-a551-998262507ca2-cni-binary-copy\") pod \"multus-additional-cni-plugins-9pfwk\" (UID: \"610cadf1-85e4-40f1-a551-998262507ca2\") " pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281664 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c4e8dfa0-1849-457a-b564-4f77e534a7e0-cni-binary-copy\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281684 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-host-var-lib-cni-multus\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281705 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-multus-cni-dir\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281723 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-hostroot\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281740 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c4e8dfa0-1849-457a-b564-4f77e534a7e0-multus-daemon-config\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281762 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/12a3e458-3f5f-46cf-b242-9a3986250bcf-rootfs\") pod \"machine-config-daemon-k2qh9\" (UID: \"12a3e458-3f5f-46cf-b242-9a3986250bcf\") " pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281775 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/610cadf1-85e4-40f1-a551-998262507ca2-cnibin\") pod \"multus-additional-cni-plugins-9pfwk\" (UID: \"610cadf1-85e4-40f1-a551-998262507ca2\") " pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.281789 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-host-var-lib-kubelet\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.290166 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 09:54:12 crc kubenswrapper[4869]: W0127 09:54:12.300828 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-c40cea59f011ea5f5d294975ab5b8253a3cd7fb262ef2497091a6cfa0bdc3512 WatchSource:0}: Error finding container c40cea59f011ea5f5d294975ab5b8253a3cd7fb262ef2497091a6cfa0bdc3512: Status 404 returned error can't find the container with id c40cea59f011ea5f5d294975ab5b8253a3cd7fb262ef2497091a6cfa0bdc3512 Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.302182 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.302350 4869 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 09:54:12 crc kubenswrapper[4869]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Jan 27 09:54:12 crc kubenswrapper[4869]: set -o allexport Jan 27 09:54:12 crc kubenswrapper[4869]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 27 09:54:12 crc kubenswrapper[4869]: source /etc/kubernetes/apiserver-url.env Jan 27 09:54:12 crc kubenswrapper[4869]: else Jan 27 09:54:12 crc kubenswrapper[4869]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 27 09:54:12 crc kubenswrapper[4869]: exit 1 Jan 27 09:54:12 crc kubenswrapper[4869]: fi Jan 27 09:54:12 crc kubenswrapper[4869]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 27 09:54:12 crc kubenswrapper[4869]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value
:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 27 09:54:12 crc kubenswrapper[4869]: > logger="UnhandledError" Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.303643 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Jan 27 09:54:12 crc kubenswrapper[4869]: W0127 09:54:12.310637 4869 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-28e2a0690325dcc7948c6ae3a96ba700c92ce57177beabbd6827abee2d132a85 WatchSource:0}: Error finding container 28e2a0690325dcc7948c6ae3a96ba700c92ce57177beabbd6827abee2d132a85: Status 404 returned error can't find the container with id 28e2a0690325dcc7948c6ae3a96ba700c92ce57177beabbd6827abee2d132a85 Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.312586 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.312923 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.313478 4869 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.314687 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.321124 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bgt4x" Jan 27 09:54:12 crc kubenswrapper[4869]: W0127 09:54:12.321701 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-b6d2d15f13c354526df3e9972223b9a5772e04dcd746609b76f3c921196498e5 WatchSource:0}: Error finding container b6d2d15f13c354526df3e9972223b9a5772e04dcd746609b76f3c921196498e5: Status 404 returned error can't find the container with id b6d2d15f13c354526df3e9972223b9a5772e04dcd746609b76f3c921196498e5 Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382061 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-multus-cni-dir\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382102 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-hostroot\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382123 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-host-var-lib-kubelet\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382145 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c4e8dfa0-1849-457a-b564-4f77e534a7e0-multus-daemon-config\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382277 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/12a3e458-3f5f-46cf-b242-9a3986250bcf-rootfs\") pod \"machine-config-daemon-k2qh9\" (UID: \"12a3e458-3f5f-46cf-b242-9a3986250bcf\") " pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382301 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/610cadf1-85e4-40f1-a551-998262507ca2-cnibin\") pod \"multus-additional-cni-plugins-9pfwk\" (UID: \"610cadf1-85e4-40f1-a551-998262507ca2\") " pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382330 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-host-run-netns\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382352 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/610cadf1-85e4-40f1-a551-998262507ca2-os-release\") pod \"multus-additional-cni-plugins-9pfwk\" (UID: 
\"610cadf1-85e4-40f1-a551-998262507ca2\") " pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382375 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-system-cni-dir\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382402 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/12a3e458-3f5f-46cf-b242-9a3986250bcf-mcd-auth-proxy-config\") pod \"machine-config-daemon-k2qh9\" (UID: \"12a3e458-3f5f-46cf-b242-9a3986250bcf\") " pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382422 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-os-release\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382443 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-host-var-lib-cni-bin\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382462 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-host-run-multus-certs\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382484 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/610cadf1-85e4-40f1-a551-998262507ca2-system-cni-dir\") pod \"multus-additional-cni-plugins-9pfwk\" (UID: \"610cadf1-85e4-40f1-a551-998262507ca2\") " pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382504 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-host-run-k8s-cni-cncf-io\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382527 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsxp7\" (UniqueName: \"kubernetes.io/projected/c4e8dfa0-1849-457a-b564-4f77e534a7e0-kube-api-access-vsxp7\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382556 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-multus-socket-dir-parent\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " 
pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382580 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k25vw\" (UniqueName: \"kubernetes.io/projected/12a3e458-3f5f-46cf-b242-9a3986250bcf-kube-api-access-k25vw\") pod \"machine-config-daemon-k2qh9\" (UID: \"12a3e458-3f5f-46cf-b242-9a3986250bcf\") " pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382600 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-cnibin\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382621 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b626l\" (UniqueName: \"kubernetes.io/projected/610cadf1-85e4-40f1-a551-998262507ca2-kube-api-access-b626l\") pod \"multus-additional-cni-plugins-9pfwk\" (UID: \"610cadf1-85e4-40f1-a551-998262507ca2\") " pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382642 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/12a3e458-3f5f-46cf-b242-9a3986250bcf-proxy-tls\") pod \"machine-config-daemon-k2qh9\" (UID: \"12a3e458-3f5f-46cf-b242-9a3986250bcf\") " pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382663 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/610cadf1-85e4-40f1-a551-998262507ca2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9pfwk\" (UID: \"610cadf1-85e4-40f1-a551-998262507ca2\") " pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382685 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/610cadf1-85e4-40f1-a551-998262507ca2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9pfwk\" (UID: \"610cadf1-85e4-40f1-a551-998262507ca2\") " pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382723 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-multus-conf-dir\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382748 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-etc-kubernetes\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382769 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/610cadf1-85e4-40f1-a551-998262507ca2-cni-binary-copy\") pod \"multus-additional-cni-plugins-9pfwk\" (UID: \"610cadf1-85e4-40f1-a551-998262507ca2\") " 
pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382789 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c4e8dfa0-1849-457a-b564-4f77e534a7e0-cni-binary-copy\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382809 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-host-var-lib-cni-multus\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382896 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-host-var-lib-cni-multus\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382902 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/610cadf1-85e4-40f1-a551-998262507ca2-system-cni-dir\") pod \"multus-additional-cni-plugins-9pfwk\" (UID: \"610cadf1-85e4-40f1-a551-998262507ca2\") " pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.382941 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-host-run-k8s-cni-cncf-io\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.383030 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-multus-cni-dir\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.383062 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-hostroot\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.383084 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-host-var-lib-kubelet\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.383444 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-multus-socket-dir-parent\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.383665 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" 
(UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-cnibin\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.383855 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c4e8dfa0-1849-457a-b564-4f77e534a7e0-multus-daemon-config\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.383913 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/12a3e458-3f5f-46cf-b242-9a3986250bcf-rootfs\") pod \"machine-config-daemon-k2qh9\" (UID: \"12a3e458-3f5f-46cf-b242-9a3986250bcf\") " pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.383940 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/610cadf1-85e4-40f1-a551-998262507ca2-cnibin\") pod \"multus-additional-cni-plugins-9pfwk\" (UID: \"610cadf1-85e4-40f1-a551-998262507ca2\") " pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.383963 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-host-run-netns\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.384001 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/610cadf1-85e4-40f1-a551-998262507ca2-os-release\") pod \"multus-additional-cni-plugins-9pfwk\" (UID: \"610cadf1-85e4-40f1-a551-998262507ca2\") " pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.384031 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-system-cni-dir\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.384375 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-multus-conf-dir\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.384479 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/12a3e458-3f5f-46cf-b242-9a3986250bcf-mcd-auth-proxy-config\") pod \"machine-config-daemon-k2qh9\" (UID: \"12a3e458-3f5f-46cf-b242-9a3986250bcf\") " pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.384528 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-os-release\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " 
pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.384552 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-host-var-lib-cni-bin\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.384578 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-host-run-multus-certs\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.384999 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4e8dfa0-1849-457a-b564-4f77e534a7e0-etc-kubernetes\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.385061 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/610cadf1-85e4-40f1-a551-998262507ca2-cni-binary-copy\") pod \"multus-additional-cni-plugins-9pfwk\" (UID: \"610cadf1-85e4-40f1-a551-998262507ca2\") " pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.385759 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c4e8dfa0-1849-457a-b564-4f77e534a7e0-cni-binary-copy\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.385770 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/610cadf1-85e4-40f1-a551-998262507ca2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9pfwk\" (UID: \"610cadf1-85e4-40f1-a551-998262507ca2\") " pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.388684 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/12a3e458-3f5f-46cf-b242-9a3986250bcf-proxy-tls\") pod \"machine-config-daemon-k2qh9\" (UID: \"12a3e458-3f5f-46cf-b242-9a3986250bcf\") " pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.388782 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/610cadf1-85e4-40f1-a551-998262507ca2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9pfwk\" (UID: \"610cadf1-85e4-40f1-a551-998262507ca2\") " pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.399912 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k25vw\" (UniqueName: \"kubernetes.io/projected/12a3e458-3f5f-46cf-b242-9a3986250bcf-kube-api-access-k25vw\") pod \"machine-config-daemon-k2qh9\" (UID: \"12a3e458-3f5f-46cf-b242-9a3986250bcf\") " pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 09:54:12 crc 
kubenswrapper[4869]: I0127 09:54:12.403064 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsxp7\" (UniqueName: \"kubernetes.io/projected/c4e8dfa0-1849-457a-b564-4f77e534a7e0-kube-api-access-vsxp7\") pod \"multus-xj5gd\" (UID: \"c4e8dfa0-1849-457a-b564-4f77e534a7e0\") " pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.405042 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b626l\" (UniqueName: \"kubernetes.io/projected/610cadf1-85e4-40f1-a551-998262507ca2-kube-api-access-b626l\") pod \"multus-additional-cni-plugins-9pfwk\" (UID: \"610cadf1-85e4-40f1-a551-998262507ca2\") " pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.463189 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.470185 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.470185 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xj5gd" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.480366 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.488283 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.489058 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.498259 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-45hzs"] Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.498984 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.514078 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.517716 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.517855 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.517866 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.517921 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.518091 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.518103 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.518119 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.526800 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.542454 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.552939 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: W0127 09:54:12.559063 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod610cadf1_85e4_40f1_a551_998262507ca2.slice/crio-34c854bcfa1448085a1224d765a2681de24ff36df91c182e9ab3eb46eca15911 WatchSource:0}: Error finding container 34c854bcfa1448085a1224d765a2681de24ff36df91c182e9ab3eb46eca15911: Status 404 returned error can't find the container with id 34c854bcfa1448085a1224d765a2681de24ff36df91c182e9ab3eb46eca15911 Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.564811 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.584248 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.584374 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.584413 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.584485 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:54:13.584457935 +0000 UTC m=+22.204882028 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.584502 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.584556 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.584580 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:13.584544458 +0000 UTC m=+22.204968611 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.584495 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.584626 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.584644 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.584656 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.584660 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:13.584651981 +0000 UTC m=+22.205076184 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.584686 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:13.584678002 +0000 UTC m=+22.205102075 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.590614 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.605132 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.614812 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.626685 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.639986 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.655729 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\
\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.670771 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.672794 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.686241 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-kubelet\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.686663 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8d38c693-da40-464a-9822-f98fb1b5ca35-ovnkube-script-lib\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.686683 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-run-openvswitch\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.686700 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-etc-openvswitch\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.686714 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-systemd-units\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.686729 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-cni-netd\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.686745 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d38c693-da40-464a-9822-f98fb1b5ca35-ovn-node-metrics-cert\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.686761 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-slash\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.686778 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-var-lib-openvswitch\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.686792 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d38c693-da40-464a-9822-f98fb1b5ca35-ovnkube-config\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.686810 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-run-systemd\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.686825 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-run-ovn\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.686858 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-cni-bin\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.686876 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-run-netns\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.686894 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-log-socket\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: 
I0127 09:54:12.686911 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.686959 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl2nv\" (UniqueName: \"kubernetes.io/projected/8d38c693-da40-464a-9822-f98fb1b5ca35-kube-api-access-zl2nv\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.686975 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d38c693-da40-464a-9822-f98fb1b5ca35-env-overrides\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.686988 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-run-ovn-kubernetes\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.687007 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.687023 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-node-log\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.687239 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.687254 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.687264 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:12 crc kubenswrapper[4869]: E0127 09:54:12.687300 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:13.687287976 +0000 UTC m=+22.307712059 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.692122 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 09:49:11 +0000 UTC, rotation deadline is 2026-11-17 10:10:16.29991918 +0000 UTC Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.692511 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7056h16m3.607413302s for next certificate rotation Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.712004 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.720688 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 
27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.738138 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\"
:{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.751333 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.762913 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.762971 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.769475 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.786234 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788047 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-node-log\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788081 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-run-ovn-kubernetes\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788112 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-kubelet\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788131 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8d38c693-da40-464a-9822-f98fb1b5ca35-ovnkube-script-lib\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788150 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-run-openvswitch\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788165 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-etc-openvswitch\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788181 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-cni-netd\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 
09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788197 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d38c693-da40-464a-9822-f98fb1b5ca35-ovn-node-metrics-cert\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788211 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-systemd-units\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788224 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-slash\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788237 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-var-lib-openvswitch\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788252 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d38c693-da40-464a-9822-f98fb1b5ca35-ovnkube-config\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788269 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-run-netns\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788282 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-run-systemd\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788299 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-run-ovn\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788313 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-cni-bin\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788328 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-log-socket\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788351 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788385 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl2nv\" (UniqueName: \"kubernetes.io/projected/8d38c693-da40-464a-9822-f98fb1b5ca35-kube-api-access-zl2nv\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788403 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d38c693-da40-464a-9822-f98fb1b5ca35-env-overrides\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788621 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-slash\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788683 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-node-log\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788708 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-run-ovn-kubernetes\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788698 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-run-ovn\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788735 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-kubelet\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788840 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-var-lib-openvswitch\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788879 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d38c693-da40-464a-9822-f98fb1b5ca35-env-overrides\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788923 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-cni-bin\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788949 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-log-socket\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.788992 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.789186 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-cni-netd\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.789220 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-run-openvswitch\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.789242 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-etc-openvswitch\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.789313 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-systemd-units\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.789329 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8d38c693-da40-464a-9822-f98fb1b5ca35-ovnkube-script-lib\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.789356 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-run-netns\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.789386 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-run-systemd\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.789529 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d38c693-da40-464a-9822-f98fb1b5ca35-ovnkube-config\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.794619 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d38c693-da40-464a-9822-f98fb1b5ca35-ovn-node-metrics-cert\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.800323 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.811132 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl2nv\" (UniqueName: \"kubernetes.io/projected/8d38c693-da40-464a-9822-f98fb1b5ca35-kube-api-access-zl2nv\") pod \"ovnkube-node-45hzs\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.813798 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.821715 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:12 crc kubenswrapper[4869]: I0127 09:54:12.987553 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 09:56:48.325404979 +0000 UTC
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.198573 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c40cea59f011ea5f5d294975ab5b8253a3cd7fb262ef2497091a6cfa0bdc3512"}
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.201871 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerStarted","Data":"6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9"}
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.201945 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerStarted","Data":"c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5"}
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.201962 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerStarted","Data":"dcd493ca85c61b5a29a8fed4de2b0c23e43ba56c9c3b2f2106854a88dc3d7a3e"}
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.204470 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.205022 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.207099 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef" exitCode=255
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.207155 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef"}
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.207234 4869 scope.go:117] "RemoveContainer" containerID="4bba355d486861fa2bff3493e6b38e2d27853a8a618085db7307fd4baa23f3aa"
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.209409 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerID="2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8" exitCode=0
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.209571 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerDied","Data":"2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8"}
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.209649 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerStarted","Data":"6f55aafb7ee0fb8e804bfbc2e3ef4d7925851605ccbe20a28f745ed1365db41e"}
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.211911 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bgt4x" event={"ID":"79b770b3-2bdc-4098-97a1-10a2dd539d16","Type":"ContainerStarted","Data":"f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582"}
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.211978 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bgt4x" event={"ID":"79b770b3-2bdc-4098-97a1-10a2dd539d16","Type":"ContainerStarted","Data":"1c48a11e39e1fcafbce26d14dfe80944cb23293b1ccc910a9f2a20b57b6ad8ff"}
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.216991 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xj5gd" event={"ID":"c4e8dfa0-1849-457a-b564-4f77e534a7e0","Type":"ContainerStarted","Data":"510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a"}
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.217026 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xj5gd" event={"ID":"c4e8dfa0-1849-457a-b564-4f77e534a7e0","Type":"ContainerStarted","Data":"2b35a54e0f51bac437879d1b654944ba816c87ac11ebb548bc8a2fdd8f106361"}
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.219864 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957"}
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.219921 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435"}
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.219941 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b6d2d15f13c354526df3e9972223b9a5772e04dcd746609b76f3c921196498e5"}
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.220823 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.221339 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"28e2a0690325dcc7948c6ae3a96ba700c92ce57177beabbd6827abee2d132a85"}
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.223374 4869 generic.go:334] "Generic (PLEG): container finished" podID="610cadf1-85e4-40f1-a551-998262507ca2" containerID="917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f" exitCode=0
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.223486 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" event={"ID":"610cadf1-85e4-40f1-a551-998262507ca2","Type":"ContainerDied","Data":"917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f"}
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.223581 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" event={"ID":"610cadf1-85e4-40f1-a551-998262507ca2","Type":"ContainerStarted","Data":"34c854bcfa1448085a1224d765a2681de24ff36df91c182e9ab3eb46eca15911"}
Jan 27 09:54:13 crc kubenswrapper[4869]: E0127 09:54:13.240396 4869 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc"
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.252076 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.269122 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.277960 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.289635 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.306789 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.307390 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.307428 4869 scope.go:117] "RemoveContainer" containerID="3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef"
Jan 27 09:54:13 crc kubenswrapper[4869]: E0127 09:54:13.307718 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.329802 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:13Z is after 2025-08-24T17:21:41Z"
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.349350 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:13Z is after 2025-08-24T17:21:41Z"
Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.364780 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.374865 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.388265 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.401316 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.422455 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"fi
nishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.435512 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.457551 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.471993 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.499101 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.537380 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.581340 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:13 crc 
kubenswrapper[4869]: I0127 09:54:13.599992 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.600094 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.600120 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:13 crc kubenswrapper[4869]: E0127 09:54:13.600144 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:54:15.600123123 +0000 UTC m=+24.220547206 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.600189 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:13 crc kubenswrapper[4869]: E0127 09:54:13.600219 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 09:54:13 crc kubenswrapper[4869]: E0127 09:54:13.600265 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:15.600257207 +0000 UTC m=+24.220681290 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 09:54:13 crc kubenswrapper[4869]: E0127 09:54:13.600267 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 09:54:13 crc kubenswrapper[4869]: E0127 09:54:13.600305 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:15.600296228 +0000 UTC m=+24.220720311 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 09:54:13 crc kubenswrapper[4869]: E0127 09:54:13.600221 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 09:54:13 crc kubenswrapper[4869]: E0127 09:54:13.600321 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 09:54:13 crc kubenswrapper[4869]: E0127 09:54:13.600331 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:13 crc kubenswrapper[4869]: E0127 09:54:13.600356 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:15.60034837 +0000 UTC m=+24.220772453 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.625051 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.658167 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.700488 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.700656 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:13 crc kubenswrapper[4869]: E0127 09:54:13.700775 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 09:54:13 crc kubenswrapper[4869]: E0127 09:54:13.700791 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 09:54:13 crc kubenswrapper[4869]: E0127 09:54:13.700802 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:13 crc kubenswrapper[4869]: E0127 09:54:13.700883 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:15.700866132 +0000 UTC m=+24.321290215 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.750759 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.776742 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.820802 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1
ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bba355d486861fa2bff3493e6b38e2d27853a8a618085db7307fd4baa23f3aa\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:06Z\\\",\\\"message\\\":\\\"W0127 09:53:55.302143 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 09:53:55.302579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769507635 cert, and key in /tmp/serving-cert-2515766099/serving-signer.crt, /tmp/serving-cert-2515766099/serving-signer.key\\\\nI0127 09:53:55.610683 1 observer_polling.go:159] Starting file observer\\\\nW0127 09:53:55.613015 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 09:53:55.613163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:53:55.614929 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2515766099/tls.crt::/tmp/serving-cert-2515766099/tls.key\\\\\\\"\\\\nF0127 09:54:06.486431 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 
09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podI
P\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:13Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.982295 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-bv4rq"] Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.982925 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-bv4rq" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.985439 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.985490 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.985733 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.987601 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 09:54:13 crc kubenswrapper[4869]: I0127 09:54:13.987719 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 17:59:32.548359467 +0000 UTC Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.004706 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.017318 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.032690 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.032747 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:14 crc kubenswrapper[4869]: E0127 09:54:14.032778 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.032810 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:14 crc kubenswrapper[4869]: E0127 09:54:14.032871 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:14 crc kubenswrapper[4869]: E0127 09:54:14.032907 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.035851 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z 
is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.036942 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.037584 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.038289 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.038923 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.039497 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.040949 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.041514 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.042074 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.043138 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.043673 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.044669 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.045395 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.046325 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.046870 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 09:54:14 crc 
kubenswrapper[4869]: I0127 09:54:14.047805 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.048379 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.048988 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.050651 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.051242 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.052337 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.053345 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.053981 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.054975 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.055631 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.056547 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.057274 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.057800 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.058424 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.058922 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.059879 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.060371 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.060859 4869 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath 
from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.060978 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.063096 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.063658 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.064524 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.066289 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.067039 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.067952 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.068691 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.069819 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.071664 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.072395 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.073763 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.075782 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.076853 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.077542 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.078784 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.079569 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.083820 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.084629 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.085758 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.086478 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.087271 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.088471 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.098546 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.104883 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b9b78eed-8d48-4b1c-962d-35ae7b8c1468-serviceca\") pod \"node-ca-bv4rq\" (UID: \"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\") " pod="openshift-image-registry/node-ca-bv4rq" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.104938 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p6x9\" (UniqueName: \"kubernetes.io/projected/b9b78eed-8d48-4b1c-962d-35ae7b8c1468-kube-api-access-2p6x9\") pod \"node-ca-bv4rq\" (UID: \"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\") " pod="openshift-image-registry/node-ca-bv4rq" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.104975 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9b78eed-8d48-4b1c-962d-35ae7b8c1468-host\") pod \"node-ca-bv4rq\" (UID: \"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\") " pod="openshift-image-registry/node-ca-bv4rq" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.137079 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.188273 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.205974 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b9b78eed-8d48-4b1c-962d-35ae7b8c1468-serviceca\") pod \"node-ca-bv4rq\" (UID: \"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\") " pod="openshift-image-registry/node-ca-bv4rq" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.206022 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p6x9\" (UniqueName: \"kubernetes.io/projected/b9b78eed-8d48-4b1c-962d-35ae7b8c1468-kube-api-access-2p6x9\") pod \"node-ca-bv4rq\" (UID: \"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\") " pod="openshift-image-registry/node-ca-bv4rq" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.206055 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9b78eed-8d48-4b1c-962d-35ae7b8c1468-host\") pod \"node-ca-bv4rq\" (UID: \"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\") " pod="openshift-image-registry/node-ca-bv4rq" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.206106 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b9b78eed-8d48-4b1c-962d-35ae7b8c1468-host\") pod \"node-ca-bv4rq\" (UID: \"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\") " pod="openshift-image-registry/node-ca-bv4rq" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.207189 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b9b78eed-8d48-4b1c-962d-35ae7b8c1468-serviceca\") pod \"node-ca-bv4rq\" (UID: \"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\") " pod="openshift-image-registry/node-ca-bv4rq" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.214655 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"containers with unready 
status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.228449 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.230420 4869 scope.go:117] "RemoveContainer" containerID="3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef" Jan 27 09:54:14 crc kubenswrapper[4869]: E0127 09:54:14.230655 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.231241 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec"} Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.234118 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerStarted","Data":"0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131"} Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.234156 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerStarted","Data":"175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3"} Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.234166 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" 
event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerStarted","Data":"f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a"} Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.234174 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerStarted","Data":"c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76"} Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.234184 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerStarted","Data":"2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa"} Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.235692 4869 generic.go:334] "Generic (PLEG): container finished" podID="610cadf1-85e4-40f1-a551-998262507ca2" containerID="24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd" exitCode=0 Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.235760 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" event={"ID":"610cadf1-85e4-40f1-a551-998262507ca2","Type":"ContainerDied","Data":"24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd"} Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.249157 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p6x9\" (UniqueName: \"kubernetes.io/projected/b9b78eed-8d48-4b1c-962d-35ae7b8c1468-kube-api-access-2p6x9\") pod \"node-ca-bv4rq\" (UID: \"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\") " pod="openshift-image-registry/node-ca-bv4rq" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.280278 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.299072 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-bv4rq" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.319483 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.363414 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.398042 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.439159 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.478658 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bba355d486861fa2bff3493e6b38e2d27853a8a618085db7307fd4baa23f3aa\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:06Z\\\",\\\"message\\\":\\\"W0127 09:53:55.302143 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 
09:53:55.302579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769507635 cert, and key in /tmp/serving-cert-2515766099/serving-signer.crt, /tmp/serving-cert-2515766099/serving-signer.key\\\\nI0127 09:53:55.610683 1 observer_polling.go:159] Starting file observer\\\\nW0127 09:53:55.613015 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 09:53:55.613163 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:53:55.614929 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2515766099/tls.crt::/tmp/serving-cert-2515766099/tls.key\\\\\\\"\\\\nF0127 09:54:06.486431 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.521652 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.566509 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.602617 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.637291 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.679685 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.715535 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.758736 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.796045 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.836576 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.876992 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.916757 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.956181 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea17
7225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.987850 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 11:07:23.883118827 +0000 UTC Jan 27 09:54:14 crc kubenswrapper[4869]: I0127 09:54:14.996624 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:14Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.038115 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.239195 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bv4rq" event={"ID":"b9b78eed-8d48-4b1c-962d-35ae7b8c1468","Type":"ContainerStarted","Data":"6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1"} Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.239235 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bv4rq" event={"ID":"b9b78eed-8d48-4b1c-962d-35ae7b8c1468","Type":"ContainerStarted","Data":"02f33afea68d8e535defd0276e368e10a6c0c491e518a48a5722198601091211"} Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.242885 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerStarted","Data":"2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874"} Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.244436 4869 
generic.go:334] "Generic (PLEG): container finished" podID="610cadf1-85e4-40f1-a551-998262507ca2" containerID="827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34" exitCode=0 Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.244462 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" event={"ID":"610cadf1-85e4-40f1-a551-998262507ca2","Type":"ContainerDied","Data":"827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34"} Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.278707 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.297380 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.310609 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.337502 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.363550 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.391602 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.417755 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.432006 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.454132 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z 
is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.470343 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.484455 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.517197 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.560819 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.596420 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.616750 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.616844 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.616870 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:15 crc kubenswrapper[4869]: E0127 09:54:15.616930 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:54:19.616907714 +0000 UTC m=+28.237331797 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.616981 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:15 crc kubenswrapper[4869]: E0127 09:54:15.616988 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 09:54:15 crc kubenswrapper[4869]: E0127 09:54:15.617004 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 09:54:15 crc kubenswrapper[4869]: E0127 09:54:15.617014 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 09:54:15 crc kubenswrapper[4869]: E0127 09:54:15.617029 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:15 crc kubenswrapper[4869]: E0127 09:54:15.617038 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 09:54:15 crc kubenswrapper[4869]: E0127 09:54:15.617063 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:19.617056958 +0000 UTC m=+28.237481041 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 09:54:15 crc kubenswrapper[4869]: E0127 09:54:15.617085 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:19.617069019 +0000 UTC m=+28.237493132 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:15 crc kubenswrapper[4869]: E0127 09:54:15.617104 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:19.61709641 +0000 UTC m=+28.237520583 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.637241 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\
\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.683821 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z 
is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.717258 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:15 crc kubenswrapper[4869]: E0127 09:54:15.717383 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 09:54:15 crc kubenswrapper[4869]: E0127 09:54:15.717398 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 09:54:15 crc kubenswrapper[4869]: E0127 09:54:15.717409 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:15 crc kubenswrapper[4869]: E0127 09:54:15.717458 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:19.717445208 +0000 UTC m=+28.337869291 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.722731 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.758495 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.796535 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.835029 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.879310 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.915332 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.1
68.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.955669 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.987989 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:01:27.482947635 +0000 UTC Jan 27 09:54:15 crc kubenswrapper[4869]: I0127 09:54:15.998726 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:15Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.032407 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.032455 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.032491 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:16 crc kubenswrapper[4869]: E0127 09:54:16.032524 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:16 crc kubenswrapper[4869]: E0127 09:54:16.032673 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:16 crc kubenswrapper[4869]: E0127 09:54:16.032894 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.042561 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.076226 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 
09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.119555 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.158568 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference 
(falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.248210 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920"} Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.250817 4869 generic.go:334] "Generic (PLEG): container finished" podID="610cadf1-85e4-40f1-a551-998262507ca2" containerID="7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c" exitCode=0 Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.250862 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" event={"ID":"610cadf1-85e4-40f1-a551-998262507ca2","Type":"ContainerDied","Data":"7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c"} Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.273988 4869 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"nam
e\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc
32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.294614 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.310261 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.323235 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.358513 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.395600 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.437380 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.455236 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.455744 4869 scope.go:117] "RemoveContainer" containerID="3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef" Jan 27 09:54:16 crc kubenswrapper[4869]: E0127 09:54:16.455895 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.477167 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.522071 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.557693 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.598564 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.637446 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.678752 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.718296 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.760644 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.797194 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.835370 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.878998 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.916051 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.959683 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.988212 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:16:07.789177051 +0000 UTC Jan 27 09:54:16 crc kubenswrapper[4869]: I0127 09:54:16.998226 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:16Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.037549 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.079601 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.119035 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.162168 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.212322 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.239558 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.259634 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerStarted","Data":"6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea"} Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.263017 4869 generic.go:334] "Generic (PLEG): container finished" podID="610cadf1-85e4-40f1-a551-998262507ca2" containerID="39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d" exitCode=0 Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.263051 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" event={"ID":"610cadf1-85e4-40f1-a551-998262507ca2","Type":"ContainerDied","Data":"39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d"} Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.288033 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.324148 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:17Z 
is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.363738 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.396771 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.435417 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.485739 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.517155 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.556865 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.595408 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.640349 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.677986 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.725070 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.761017 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.805487 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.839685 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:17Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:17 crc kubenswrapper[4869]: I0127 09:54:17.988622 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 20:45:30.017536714 +0000 UTC Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.033120 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.033158 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.033153 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:18 crc kubenswrapper[4869]: E0127 09:54:18.033326 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:18 crc kubenswrapper[4869]: E0127 09:54:18.033447 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:18 crc kubenswrapper[4869]: E0127 09:54:18.033564 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.036263 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.037995 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.038056 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.038085 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.038205 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.048314 4869 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.048686 4869 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.050880 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.050908 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.050919 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.050935 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.050946 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:18Z","lastTransitionTime":"2026-01-27T09:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:18 crc kubenswrapper[4869]: E0127 09:54:18.071748 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.078342 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.078392 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.078413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.078443 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.078468 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:18Z","lastTransitionTime":"2026-01-27T09:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:18 crc kubenswrapper[4869]: E0127 09:54:18.103463 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.109861 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.109915 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.109936 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.109965 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.109987 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:18Z","lastTransitionTime":"2026-01-27T09:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:18 crc kubenswrapper[4869]: E0127 09:54:18.131306 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.136594 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.136647 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.136669 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.136725 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.136747 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:18Z","lastTransitionTime":"2026-01-27T09:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:18 crc kubenswrapper[4869]: E0127 09:54:18.165472 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.169907 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.169968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.169986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.170011 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.170027 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:18Z","lastTransitionTime":"2026-01-27T09:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:18 crc kubenswrapper[4869]: E0127 09:54:18.185513 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:18 crc kubenswrapper[4869]: E0127 09:54:18.185631 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.187732 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.187764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.187775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.187792 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.187805 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:18Z","lastTransitionTime":"2026-01-27T09:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.271915 4869 generic.go:334] "Generic (PLEG): container finished" podID="610cadf1-85e4-40f1-a551-998262507ca2" containerID="6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a" exitCode=0 Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.271963 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" event={"ID":"610cadf1-85e4-40f1-a551-998262507ca2","Type":"ContainerDied","Data":"6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a"} Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.298996 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.299039 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.299048 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.299064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.299075 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:18Z","lastTransitionTime":"2026-01-27T09:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.301037 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.314235 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.327365 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.339306 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.349716 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.363396 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.382029 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.396164 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.406320 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.406367 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.406385 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.406411 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.406430 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:18Z","lastTransitionTime":"2026-01-27T09:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.418426 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb
00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.431807 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.443577 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.453207 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.470548 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f
1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.481581 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:18Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.511322 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.511356 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.511370 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.511386 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.511396 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:18Z","lastTransitionTime":"2026-01-27T09:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.613876 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.613917 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.613927 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.613944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.613954 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:18Z","lastTransitionTime":"2026-01-27T09:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.716323 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.716369 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.716380 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.716397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.716411 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:18Z","lastTransitionTime":"2026-01-27T09:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.818102 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.818313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.818323 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.818337 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.818347 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:18Z","lastTransitionTime":"2026-01-27T09:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.920913 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.920947 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.920956 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.920971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.920979 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:18Z","lastTransitionTime":"2026-01-27T09:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 09:54:18 crc kubenswrapper[4869]: I0127 09:54:18.989584 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 00:39:57.168936839 +0000 UTC
Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.023590 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.023651 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.023669 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.023697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.023715 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:19Z","lastTransitionTime":"2026-01-27T09:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.097800 4869 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.126864 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.126922 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.126940 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.126963 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.126983 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:19Z","lastTransitionTime":"2026-01-27T09:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.229618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.229679 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.229698 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.229723 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.229740 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:19Z","lastTransitionTime":"2026-01-27T09:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.283560 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerStarted","Data":"22e21225081a7c5d8a7ee6db711f82670c29d5688a5445f6b7c47170804b37fa"}
Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.284942 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.285095 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.297361 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" event={"ID":"610cadf1-85e4-40f1-a551-998262507ca2","Type":"ContainerStarted","Data":"40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14"}
Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.305248 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.316280 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.328066 4869 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.328521 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.332701 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.332786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.332813 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.332880 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.332910 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:19Z","lastTransitionTime":"2026-01-27T09:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.334254 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.351199 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.363878 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.376551 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.396056 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log
-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e21225081a7c5d8a7ee6db711f82670c29d5688a5445f6b7c47170804b37fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\
\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.424813 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.435722 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.435763 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.435775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.435791 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.435804 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:19Z","lastTransitionTime":"2026-01-27T09:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.445473 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.460272 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.481411 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.493456 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.508925 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.521699 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.535600 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.538527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.538712 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.538860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.538974 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.539120 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:19Z","lastTransitionTime":"2026-01-27T09:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.544862 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.552460 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.566857 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.575442 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.586636 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.597676 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.610761 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.622998 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.635403 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.641395 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.641543 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.641649 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.641752 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.641864 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:19Z","lastTransitionTime":"2026-01-27T09:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.647683 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.666001 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.666667 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.666728 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.666758 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.666779 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:19 crc kubenswrapper[4869]: E0127 09:54:19.666894 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 09:54:19 crc kubenswrapper[4869]: E0127 09:54:19.666911 4869 projected.go:288] Couldn't get 
configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 09:54:19 crc kubenswrapper[4869]: E0127 09:54:19.666922 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:19 crc kubenswrapper[4869]: E0127 09:54:19.666936 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 09:54:19 crc kubenswrapper[4869]: E0127 09:54:19.666947 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:54:27.666911604 +0000 UTC m=+36.287335707 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:54:19 crc kubenswrapper[4869]: E0127 09:54:19.666998 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:27.666983077 +0000 UTC m=+36.287407180 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:19 crc kubenswrapper[4869]: E0127 09:54:19.667010 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 09:54:19 crc kubenswrapper[4869]: E0127 09:54:19.667018 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:27.667008727 +0000 UTC m=+36.287432820 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 09:54:19 crc kubenswrapper[4869]: E0127 09:54:19.667040 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:27.667027458 +0000 UTC m=+36.287451541 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.678612 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":
\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.699212 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e21225081a7c5d8a7ee6db711f82670c29d568
8a5445f6b7c47170804b37fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.744950 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.745216 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.745317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.745416 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.745507 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:19Z","lastTransitionTime":"2026-01-27T09:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.767730 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.767762 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:19 crc kubenswrapper[4869]: E0127 09:54:19.767872 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 09:54:19 crc kubenswrapper[4869]: E0127 09:54:19.768108 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 09:54:19 crc kubenswrapper[4869]: E0127 09:54:19.768122 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:19 crc kubenswrapper[4869]: E0127 09:54:19.768163 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:27.768152129 +0000 UTC m=+36.388576212 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.772010 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.777201 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.781991 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.793628 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.807150 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.820229 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.833391 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.846612 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.847929 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.847986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.847995 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.848008 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.848018 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:19Z","lastTransitionTime":"2026-01-27T09:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.879645 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.930582 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e21225081a7c5d8a7ee6db711f82670c29d568
8a5445f6b7c47170804b37fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.952270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.952310 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.952318 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.952334 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.952345 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:19Z","lastTransitionTime":"2026-01-27T09:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.964849 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.989993 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 23:49:47.390369195 +0000 UTC Jan 27 09:54:19 crc kubenswrapper[4869]: I0127 09:54:19.997162 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:19Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.032622 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.032673 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.032668 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:20 crc kubenswrapper[4869]: E0127 09:54:20.032780 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:20 crc kubenswrapper[4869]: E0127 09:54:20.032851 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:20 crc kubenswrapper[4869]: E0127 09:54:20.032928 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.039520 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:20Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.054519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.054557 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.054566 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.054581 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.054590 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:20Z","lastTransitionTime":"2026-01-27T09:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.076228 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:20Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.118324 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:20Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.154186 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:20Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.156483 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.156522 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.156530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.156547 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.156556 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:20Z","lastTransitionTime":"2026-01-27T09:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.198480 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:20Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.241446 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:20Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.258048 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.258100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.258111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.258124 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.258133 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:20Z","lastTransitionTime":"2026-01-27T09:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.277423 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:20Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.300817 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.322915 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e21225081a7c5d8a7ee6db711f82670c29d568
8a5445f6b7c47170804b37fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:20Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.355024 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:20Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.360559 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.360610 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.360618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.360632 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.360641 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:20Z","lastTransitionTime":"2026-01-27T09:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.399387 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:20Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.437768 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:20Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.462663 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.462728 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.462745 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.462768 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.462785 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:20Z","lastTransitionTime":"2026-01-27T09:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.483518 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:20Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.519791 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:20Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.558938 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:20Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.565400 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.565426 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:20 crc 
kubenswrapper[4869]: I0127 09:54:20.565435 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.565476 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.565485 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:20Z","lastTransitionTime":"2026-01-27T09:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.598090 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:20Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.637030 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:20Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.668064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.668121 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.668130 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.668143 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.668152 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:20Z","lastTransitionTime":"2026-01-27T09:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.677372 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:20Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.716683 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:20Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.756000 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:20Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.774120 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.774170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.774187 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.774211 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.774224 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:20Z","lastTransitionTime":"2026-01-27T09:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.875911 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.875939 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.875948 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.875960 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.875968 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:20Z","lastTransitionTime":"2026-01-27T09:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.978047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.978108 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.978128 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.978151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.978168 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:20Z","lastTransitionTime":"2026-01-27T09:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:20 crc kubenswrapper[4869]: I0127 09:54:20.990610 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:40:32.80183085 +0000 UTC Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.080486 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.080547 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.080565 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.080596 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.080613 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:21Z","lastTransitionTime":"2026-01-27T09:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.182459 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.182496 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.182505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.182519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.182528 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:21Z","lastTransitionTime":"2026-01-27T09:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.284608 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.284644 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.284654 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.284668 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.284679 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:21Z","lastTransitionTime":"2026-01-27T09:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.303327 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.386487 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.386519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.386528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.386541 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.386549 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:21Z","lastTransitionTime":"2026-01-27T09:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.488903 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.488947 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.488957 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.488973 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.488982 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:21Z","lastTransitionTime":"2026-01-27T09:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.590989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.591031 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.591040 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.591056 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.591065 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:21Z","lastTransitionTime":"2026-01-27T09:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.692574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.692612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.692622 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.692636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.692645 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:21Z","lastTransitionTime":"2026-01-27T09:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.794137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.794170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.794179 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.794192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.794202 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:21Z","lastTransitionTime":"2026-01-27T09:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.896096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.896136 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.896147 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.896164 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.896177 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:21Z","lastTransitionTime":"2026-01-27T09:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.990901 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 07:38:12.726059578 +0000 UTC Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.999303 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.999450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.999478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.999507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:21 crc kubenswrapper[4869]: I0127 09:54:21.999528 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:21Z","lastTransitionTime":"2026-01-27T09:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.032872 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.032996 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.033169 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:22 crc kubenswrapper[4869]: E0127 09:54:22.033318 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:22 crc kubenswrapper[4869]: E0127 09:54:22.033403 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:22 crc kubenswrapper[4869]: E0127 09:54:22.033516 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.066718 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40
b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.089367 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.102569 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.102601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.102610 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.102622 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.102632 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:22Z","lastTransitionTime":"2026-01-27T09:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.118755 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e21225081a7c5d8a7ee6db711f82670c29d568
8a5445f6b7c47170804b37fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.140206 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.151221 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.176328 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.191212 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.203166 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.204894 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.204949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.204966 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.204987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.205004 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:22Z","lastTransitionTime":"2026-01-27T09:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.212789 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.224556 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.235114 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.245715 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.255533 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.268424 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.280347 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 
09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.307082 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.307121 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.307133 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.307149 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.307160 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:22Z","lastTransitionTime":"2026-01-27T09:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.307186 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-45hzs_8d38c693-da40-464a-9822-f98fb1b5ca35/ovnkube-controller/0.log" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.309681 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerID="22e21225081a7c5d8a7ee6db711f82670c29d5688a5445f6b7c47170804b37fa" exitCode=1 Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.309713 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerDied","Data":"22e21225081a7c5d8a7ee6db711f82670c29d5688a5445f6b7c47170804b37fa"} Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.310334 4869 scope.go:117] "RemoveContainer" containerID="22e21225081a7c5d8a7ee6db711f82670c29d5688a5445f6b7c47170804b37fa" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.320873 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.334488 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.344807 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.354255 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.365256 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.376913 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.394478 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.406803 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.409712 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.409752 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.409761 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.409774 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.409784 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:22Z","lastTransitionTime":"2026-01-27T09:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.428439 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22e21225081a7c5d8a7ee6db711f82670c29d5688a5445f6b7c47170804b37fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e21225081a7c5d8a7ee6db711f82670c29d5688a5445f6b7c47170804b37fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:21Z\\\",\\\"message\\\":\\\"0127 09:54:21.742700 6170 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.742765 6170 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.742901 6170 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743063 6170 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743142 6170 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743189 6170 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743452 6170 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 09:54:21.743466 6170 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 09:54:21.743478 6170 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 09:54:21.743545 6170 factory.go:656] Stopping watch factory\\\\nI0127 09:54:21.743559 6170 ovnkube.go:599] Stopped ovnkube\\\\nI0127 09:54:21.743580 6170 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 09:54:21.743589 6170 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.440161 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.452978 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.463450 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.472469 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.483955 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.492850 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.511466 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.511500 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.511508 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.511521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.511529 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:22Z","lastTransitionTime":"2026-01-27T09:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.614073 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.614121 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.614134 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.614150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.614163 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:22Z","lastTransitionTime":"2026-01-27T09:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.716002 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.716045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.716057 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.716073 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.716084 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:22Z","lastTransitionTime":"2026-01-27T09:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.818050 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.818085 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.818097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.818111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.818120 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:22Z","lastTransitionTime":"2026-01-27T09:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.920050 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.920098 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.920110 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.920129 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.920141 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:22Z","lastTransitionTime":"2026-01-27T09:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:22 crc kubenswrapper[4869]: I0127 09:54:22.991516 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 09:12:50.394807302 +0000 UTC Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.022263 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.022302 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.022327 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.022344 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.022356 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:23Z","lastTransitionTime":"2026-01-27T09:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.124699 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.124741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.124765 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.124782 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.124793 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:23Z","lastTransitionTime":"2026-01-27T09:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.227055 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.227085 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.227094 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.227109 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.227118 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:23Z","lastTransitionTime":"2026-01-27T09:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.314204 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-45hzs_8d38c693-da40-464a-9822-f98fb1b5ca35/ovnkube-controller/1.log" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.314788 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-45hzs_8d38c693-da40-464a-9822-f98fb1b5ca35/ovnkube-controller/0.log" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.317429 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerID="22a3bb16e18362a478c9aff77d70255ea9fd957b209a36b6e61a40d8a29527d2" exitCode=1 Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.317472 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerDied","Data":"22a3bb16e18362a478c9aff77d70255ea9fd957b209a36b6e61a40d8a29527d2"} Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.317511 4869 scope.go:117] "RemoveContainer" containerID="22e21225081a7c5d8a7ee6db711f82670c29d5688a5445f6b7c47170804b37fa" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.318346 4869 scope.go:117] "RemoveContainer" containerID="22a3bb16e18362a478c9aff77d70255ea9fd957b209a36b6e61a40d8a29527d2" Jan 27 09:54:23 crc kubenswrapper[4869]: E0127 09:54:23.318524 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-45hzs_openshift-ovn-kubernetes(8d38c693-da40-464a-9822-f98fb1b5ca35)\"" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.329819 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.329893 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.329909 4869 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.329930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.329944 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:23Z","lastTransitionTime":"2026-01-27T09:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.335312 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:23Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.349871 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:23Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.359748 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:23Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.370213 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:23Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.383995 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:23Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.395772 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:23Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.411258 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:23Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.420416 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:23Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.432594 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.432626 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.432635 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.432648 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.432657 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:23Z","lastTransitionTime":"2026-01-27T09:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.437705 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a3bb16e18362a478c9aff77d70255ea9fd957b209a36b6e61a40d8a29527d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e21225081a7c5d8a7ee6db711f82670c29d5688a5445f6b7c47170804b37fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:21Z\\\",\\\"message\\\":\\\"0127 09:54:21.742700 6170 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.742765 6170 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.742901 6170 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743063 6170 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743142 6170 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743189 6170 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743452 6170 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 09:54:21.743466 6170 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 09:54:21.743478 6170 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 09:54:21.743545 6170 factory.go:656] Stopping watch factory\\\\nI0127 09:54:21.743559 6170 ovnkube.go:599] Stopped ovnkube\\\\nI0127 09:54:21.743580 6170 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 09:54:21.743589 6170 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22a3bb16e18362a478c9aff77d70255ea9fd957b209a36b6e61a40d8a29527d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:23Z\\\",\\\"message\\\":\\\", UUID:\\\\\\\"97419c58-41c7-41d7-a137-a446f0c7eeb3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.138\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 09:54:23.207083 6287 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 09:54:23.207145 6287 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:23Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.447850 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b8
9c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:23Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.462595 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:23Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.475122 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:23Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.486545 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:23Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.499528 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:23Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.508976 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:23Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.534321 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.534351 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.534359 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.534380 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.534389 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:23Z","lastTransitionTime":"2026-01-27T09:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.636951 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.637198 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.637210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.637229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.637241 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:23Z","lastTransitionTime":"2026-01-27T09:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.739207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.739235 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.739244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.739257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.739265 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:23Z","lastTransitionTime":"2026-01-27T09:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.842304 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.842355 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.842371 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.842393 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.842411 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:23Z","lastTransitionTime":"2026-01-27T09:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.945024 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.945101 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.945119 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.945145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.945167 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:23Z","lastTransitionTime":"2026-01-27T09:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:23 crc kubenswrapper[4869]: I0127 09:54:23.992477 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 21:36:00.288902795 +0000 UTC Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.032111 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:24 crc kubenswrapper[4869]: E0127 09:54:24.032244 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.032265 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.032337 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:24 crc kubenswrapper[4869]: E0127 09:54:24.032417 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:24 crc kubenswrapper[4869]: E0127 09:54:24.032469 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.049763 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.049794 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.049802 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.049815 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.049825 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:24Z","lastTransitionTime":"2026-01-27T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.153612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.153666 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.153675 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.153691 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.153701 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:24Z","lastTransitionTime":"2026-01-27T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.255660 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.255704 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.255715 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.255729 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.255737 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:24Z","lastTransitionTime":"2026-01-27T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.321625 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-45hzs_8d38c693-da40-464a-9822-f98fb1b5ca35/ovnkube-controller/1.log" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.357802 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.357863 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.357879 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.357895 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.357909 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:24Z","lastTransitionTime":"2026-01-27T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.424920 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x"] Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.425304 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.426750 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.426885 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.436877 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.460439 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.460506 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.460518 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.460553 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.460566 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:24Z","lastTransitionTime":"2026-01-27T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.470461 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.493200 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.509126 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.516563 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/df853189-32d1-44e5-8016-631a6f2880f0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xqf8x\" (UID: \"df853189-32d1-44e5-8016-631a6f2880f0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.516631 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/df853189-32d1-44e5-8016-631a6f2880f0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xqf8x\" (UID: \"df853189-32d1-44e5-8016-631a6f2880f0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.516676 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/df853189-32d1-44e5-8016-631a6f2880f0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xqf8x\" (UID: \"df853189-32d1-44e5-8016-631a6f2880f0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.516726 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2tzh\" (UniqueName: \"kubernetes.io/projected/df853189-32d1-44e5-8016-631a6f2880f0-kube-api-access-c2tzh\") pod \"ovnkube-control-plane-749d76644c-xqf8x\" (UID: \"df853189-32d1-44e5-8016-631a6f2880f0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.520028 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.536896 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df853189-32d1-44e5-8016-631a6f2880f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xqf8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.549480 4869 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.562927 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.562972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.562983 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.563001 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.563011 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:24Z","lastTransitionTime":"2026-01-27T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.571220 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.583541 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.600185 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22
578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a3bb16e18362a478c9aff77d70255ea9fd957b209a36b6e61a40d8a29527d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e21225081a7c5d8a7ee6db711f82670c29d5688a5445f6b7c47170804b37fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:21Z\\\",\\\"message\\\":\\\"0127 09:54:21.742700 6170 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.742765 6170 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.742901 6170 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743063 6170 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743142 6170 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743189 6170 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743452 6170 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 09:54:21.743466 6170 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 09:54:21.743478 6170 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 09:54:21.743545 6170 factory.go:656] Stopping watch factory\\\\nI0127 09:54:21.743559 6170 ovnkube.go:599] Stopped ovnkube\\\\nI0127 09:54:21.743580 6170 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 09:54:21.743589 6170 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22a3bb16e18362a478c9aff77d70255ea9fd957b209a36b6e61a40d8a29527d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:23Z\\\",\\\"message\\\":\\\", UUID:\\\\\\\"97419c58-41c7-41d7-a137-a446f0c7eeb3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.138\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 09:54:23.207083 6287 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 09:54:23.207145 6287 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.611500 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b8
9c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.617706 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/df853189-32d1-44e5-8016-631a6f2880f0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xqf8x\" (UID: \"df853189-32d1-44e5-8016-631a6f2880f0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.617766 4869 
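The kube-apiserver-check-endpoints container earlier in this log is already in "back-off 10s restarting failed container", and ovnkube-controller above shows restartCount 1 with a fresh termination; if neither recovers, the kubelet keeps lengthening the restart delay. A sketch of that schedule under the upstream kubelet defaults (10s initial backoff, doubling per restart, capped at five minutes); this is an illustration of the documented behavior, not kubelet code:

```python
# Sketch of the kubelet's crash-loop restart backoff under upstream defaults:
# the delay starts at 10s, doubles after each failed restart, and is capped
# at 300s, which is the steady state behind a long-lived CrashLoopBackOff.
def crashloop_delays(restarts: int, base: float = 10.0, cap: float = 300.0):
    """Yield the delay (seconds) waited before each of the next restarts."""
    delay = base
    for _ in range(restarts):
        yield min(delay, cap)
        delay *= 2

print(list(crashloop_delays(7)))
# [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]
```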
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/df853189-32d1-44e5-8016-631a6f2880f0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xqf8x\" (UID: \"df853189-32d1-44e5-8016-631a6f2880f0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.617785 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/df853189-32d1-44e5-8016-631a6f2880f0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xqf8x\" (UID: \"df853189-32d1-44e5-8016-631a6f2880f0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.617809 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2tzh\" (UniqueName: \"kubernetes.io/projected/df853189-32d1-44e5-8016-631a6f2880f0-kube-api-access-c2tzh\") pod \"ovnkube-control-plane-749d76644c-xqf8x\" (UID: \"df853189-32d1-44e5-8016-631a6f2880f0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.618537 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/df853189-32d1-44e5-8016-631a6f2880f0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xqf8x\" (UID: \"df853189-32d1-44e5-8016-631a6f2880f0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.618651 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/df853189-32d1-44e5-8016-631a6f2880f0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xqf8x\" (UID: \"df853189-32d1-44e5-8016-631a6f2880f0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.622917 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/df853189-32d1-44e5-8016-631a6f2880f0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xqf8x\" (UID: \"df853189-32d1-44e5-8016-631a6f2880f0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.624938 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.634183 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2tzh\" (UniqueName: \"kubernetes.io/projected/df853189-32d1-44e5-8016-631a6f2880f0-kube-api-access-c2tzh\") pod \"ovnkube-control-plane-749d76644c-xqf8x\" (UID: \"df853189-32d1-44e5-8016-631a6f2880f0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.635195 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.644607 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.656002 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.664895 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
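The multus-additional-cni-plugins-9pfwk entry above also records its init containers, and their timestamps show the strict ordering Kubernetes guarantees: each init container must exit 0 before the next may start. A small check over the (name, startedAt, finishedAt) triples transcribed from that entry:

```python
# Verify, from timestamps transcribed out of the entry above, that each init
# container finished before its successor started (Kubernetes runs init
# containers strictly in sequence, each required to exit 0).
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%SZ"
inits = [
    ("egress-router-binary-copy", "2026-01-27T09:54:12Z", "2026-01-27T09:54:13Z"),
    ("cni-plugins", "2026-01-27T09:54:13Z", "2026-01-27T09:54:13Z"),
    ("bond-cni-plugin", "2026-01-27T09:54:14Z", "2026-01-27T09:54:14Z"),
    ("routeoverride-cni", "2026-01-27T09:54:15Z", "2026-01-27T09:54:15Z"),
    ("whereabouts-cni-bincopy", "2026-01-27T09:54:16Z", "2026-01-27T09:54:16Z"),
    ("whereabouts-cni", "2026-01-27T09:54:17Z", "2026-01-27T09:54:17Z"),
]
for (prev, _, prev_end), (nxt, nxt_start, _) in zip(inits, inits[1:]):
    assert datetime.strptime(nxt_start, FMT) >= datetime.strptime(prev_end, FMT), (prev, nxt)
print("init containers ran strictly in sequence")
```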
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:24Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.665928 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.665972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.665986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.666003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.666015 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:24Z","lastTransitionTime":"2026-01-27T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.736736 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.768540 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.768572 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.768580 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.768596 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.768605 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:24Z","lastTransitionTime":"2026-01-27T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.870908 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.870949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.870959 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.870974 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.870984 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:24Z","lastTransitionTime":"2026-01-27T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.973428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.973468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.973477 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.973491 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.973501 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:24Z","lastTransitionTime":"2026-01-27T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:24 crc kubenswrapper[4869]: I0127 09:54:24.993655 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 11:51:24.597430794 +0000 UTC Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.076242 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.076275 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.076283 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.076297 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.076307 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:25Z","lastTransitionTime":"2026-01-27T09:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.178660 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.178697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.178706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.178721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.178735 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:25Z","lastTransitionTime":"2026-01-27T09:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.281607 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.281688 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.281710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.281740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.281759 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:25Z","lastTransitionTime":"2026-01-27T09:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.330652 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" event={"ID":"df853189-32d1-44e5-8016-631a6f2880f0","Type":"ContainerStarted","Data":"4a0977701a311923ecf54012a82d2e5ca4804846c56019a08b28d7dd556af7d1"} Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.330767 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" event={"ID":"df853189-32d1-44e5-8016-631a6f2880f0","Type":"ContainerStarted","Data":"07c2826308ac00d904e3f5e85796421150b10d87d5705c44b9a974986ee5537c"} Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.330787 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" event={"ID":"df853189-32d1-44e5-8016-631a6f2880f0","Type":"ContainerStarted","Data":"f979e6226e554d813361ae72f39f19f126640801bf0cd3826ca5b7cd150063e8"} Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.357311 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.373584 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.383745 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.383768 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.383776 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.383789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.383797 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:25Z","lastTransitionTime":"2026-01-27T09:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.392721 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a3bb16e18362a478c9aff77d70255ea9fd957b
209a36b6e61a40d8a29527d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e21225081a7c5d8a7ee6db711f82670c29d5688a5445f6b7c47170804b37fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:21Z\\\",\\\"message\\\":\\\"0127 09:54:21.742700 6170 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.742765 6170 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.742901 6170 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743063 6170 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743142 6170 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743189 6170 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743452 6170 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 09:54:21.743466 6170 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 09:54:21.743478 6170 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 09:54:21.743545 6170 factory.go:656] Stopping watch factory\\\\nI0127 09:54:21.743559 6170 ovnkube.go:599] Stopped ovnkube\\\\nI0127 09:54:21.743580 6170 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 09:54:21.743589 6170 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22a3bb16e18362a478c9aff77d70255ea9fd957b209a36b6e61a40d8a29527d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:23Z\\\",\\\"message\\\":\\\", UUID:\\\\\\\"97419c58-41c7-41d7-a137-a446f0c7eeb3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.138\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 09:54:23.207083 6287 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 09:54:23.207145 6287 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.408336 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.417408 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.428753 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.438954 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.448903 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.459700 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.470327 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.482558 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df853189-32d1-44e5-8016-631a6f2880f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c2826308ac00d904e3f5e85796421150b10d87d5705c44b9a974986ee5537c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0977701a311923ecf54012a82d2e5ca4804846c56019a08b28d7dd556af7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xqf8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.486105 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.486127 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.486135 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.486148 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.486157 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:25Z","lastTransitionTime":"2026-01-27T09:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.494591 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.512450 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.524230 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.534727 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.546719 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 
09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.588382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.588427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.588439 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.588455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.588467 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:25Z","lastTransitionTime":"2026-01-27T09:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.692225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.692274 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.692287 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.692306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.692318 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:25Z","lastTransitionTime":"2026-01-27T09:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.795592 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.795659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.795677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.795703 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.795723 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:25Z","lastTransitionTime":"2026-01-27T09:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.892723 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-p5frm"] Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.893149 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:25 crc kubenswrapper[4869]: E0127 09:54:25.893204 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.898212 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.898250 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.898262 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.898277 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.898289 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:25Z","lastTransitionTime":"2026-01-27T09:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.912386 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.930960 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvf4w\" (UniqueName: \"kubernetes.io/projected/0bf72cba-f163-4dc2-b157-cfeb56d0177b-kube-api-access-xvf4w\") pod \"network-metrics-daemon-p5frm\" (UID: \"0bf72cba-f163-4dc2-b157-cfeb56d0177b\") " pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.931078 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs\") pod \"network-metrics-daemon-p5frm\" (UID: \"0bf72cba-f163-4dc2-b157-cfeb56d0177b\") " pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.931414 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.944393 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.955273 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.971724 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.982935 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.994215 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 05:10:28.114443867 +0000 UTC Jan 27 09:54:25 crc kubenswrapper[4869]: I0127 09:54:25.996661 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:25Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.000501 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.000571 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.000591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.000617 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.000712 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:26Z","lastTransitionTime":"2026-01-27T09:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.011679 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.023821 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.032465 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.032510 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.032553 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:26 crc kubenswrapper[4869]: E0127 09:54:26.032613 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.032661 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvf4w\" (UniqueName: \"kubernetes.io/projected/0bf72cba-f163-4dc2-b157-cfeb56d0177b-kube-api-access-xvf4w\") pod \"network-metrics-daemon-p5frm\" (UID: \"0bf72cba-f163-4dc2-b157-cfeb56d0177b\") " pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.032749 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs\") pod \"network-metrics-daemon-p5frm\" (UID: \"0bf72cba-f163-4dc2-b157-cfeb56d0177b\") " pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:26 crc kubenswrapper[4869]: E0127 09:54:26.032793 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:26 crc kubenswrapper[4869]: E0127 09:54:26.032922 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:26 crc kubenswrapper[4869]: E0127 09:54:26.032939 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 09:54:26 crc kubenswrapper[4869]: E0127 09:54:26.033017 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs podName:0bf72cba-f163-4dc2-b157-cfeb56d0177b nodeName:}" failed. No retries permitted until 2026-01-27 09:54:26.532991089 +0000 UTC m=+35.153415192 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs") pod "network-metrics-daemon-p5frm" (UID: "0bf72cba-f163-4dc2-b157-cfeb56d0177b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.042285 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T09:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.054483 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.060705 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvf4w\" (UniqueName: \"kubernetes.io/projected/0bf72cba-f163-4dc2-b157-cfeb56d0177b-kube-api-access-xvf4w\") pod \"network-metrics-daemon-p5frm\" (UID: \"0bf72cba-f163-4dc2-b157-cfeb56d0177b\") " pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.067090 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df853189-32d1-44e5-8016-631a6f2880f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c2826308ac00d904e3f5e85796421150b10d87d5705c44b9a974986ee5537c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0977701a311923ecf54012a82d2e5ca4804846c56019a08b28d7dd556af7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xqf8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.084253 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.103047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.103104 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.103116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.103134 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.103148 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:26Z","lastTransitionTime":"2026-01-27T09:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.109042 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.121501 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.137870 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22
578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a3bb16e18362a478c9aff77d70255ea9fd957b209a36b6e61a40d8a29527d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e21225081a7c5d8a7ee6db711f82670c29d5688a5445f6b7c47170804b37fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:21Z\\\",\\\"message\\\":\\\"0127 09:54:21.742700 6170 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.742765 6170 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.742901 6170 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743063 6170 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743142 6170 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743189 6170 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743452 6170 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 09:54:21.743466 6170 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 09:54:21.743478 6170 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 09:54:21.743545 6170 factory.go:656] Stopping watch factory\\\\nI0127 09:54:21.743559 6170 ovnkube.go:599] Stopped ovnkube\\\\nI0127 09:54:21.743580 6170 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 09:54:21.743589 6170 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22a3bb16e18362a478c9aff77d70255ea9fd957b209a36b6e61a40d8a29527d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:23Z\\\",\\\"message\\\":\\\", UUID:\\\\\\\"97419c58-41c7-41d7-a137-a446f0c7eeb3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.138\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 09:54:23.207083 6287 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 09:54:23.207145 6287 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.148936 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p5frm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf72cba-f163-4dc2-b157-cfeb56d0177b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p5frm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:26Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.205905 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.205949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.205965 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.205981 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.205991 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:26Z","lastTransitionTime":"2026-01-27T09:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.308471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.308507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.308515 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.308528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.308542 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:26Z","lastTransitionTime":"2026-01-27T09:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.411160 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.411192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.411208 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.411223 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.411232 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:26Z","lastTransitionTime":"2026-01-27T09:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.514478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.514521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.514530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.514543 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.514552 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:26Z","lastTransitionTime":"2026-01-27T09:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.537609 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs\") pod \"network-metrics-daemon-p5frm\" (UID: \"0bf72cba-f163-4dc2-b157-cfeb56d0177b\") " pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:26 crc kubenswrapper[4869]: E0127 09:54:26.537781 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 09:54:26 crc kubenswrapper[4869]: E0127 09:54:26.537928 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs podName:0bf72cba-f163-4dc2-b157-cfeb56d0177b nodeName:}" failed. No retries permitted until 2026-01-27 09:54:27.537899673 +0000 UTC m=+36.158323796 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs") pod "network-metrics-daemon-p5frm" (UID: "0bf72cba-f163-4dc2-b157-cfeb56d0177b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.616766 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.616824 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.616866 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.616889 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.616909 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:26Z","lastTransitionTime":"2026-01-27T09:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.719576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.719610 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.719623 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.719640 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.719654 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:26Z","lastTransitionTime":"2026-01-27T09:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.827048 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.827113 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.827148 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.827182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.827208 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:26Z","lastTransitionTime":"2026-01-27T09:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.929786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.929827 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.929856 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.929871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.929882 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:26Z","lastTransitionTime":"2026-01-27T09:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:26 crc kubenswrapper[4869]: I0127 09:54:26.995435 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:17:50.68519971 +0000 UTC Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.032674 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.033027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.033166 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.033293 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.033400 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:27Z","lastTransitionTime":"2026-01-27T09:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.137168 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.137249 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.137273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.137301 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.137321 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:27Z","lastTransitionTime":"2026-01-27T09:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.240118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.240154 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.240164 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.240179 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.240191 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:27Z","lastTransitionTime":"2026-01-27T09:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.342870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.342909 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.342920 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.342944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.342956 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:27Z","lastTransitionTime":"2026-01-27T09:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.446008 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.446071 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.446095 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.446128 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.446151 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:27Z","lastTransitionTime":"2026-01-27T09:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.548045 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs\") pod \"network-metrics-daemon-p5frm\" (UID: \"0bf72cba-f163-4dc2-b157-cfeb56d0177b\") " pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:27 crc kubenswrapper[4869]: E0127 09:54:27.548348 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 09:54:27 crc kubenswrapper[4869]: E0127 09:54:27.548455 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs podName:0bf72cba-f163-4dc2-b157-cfeb56d0177b nodeName:}" failed. No retries permitted until 2026-01-27 09:54:29.54842919 +0000 UTC m=+38.168853313 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs") pod "network-metrics-daemon-p5frm" (UID: "0bf72cba-f163-4dc2-b157-cfeb56d0177b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.549257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.549396 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.549490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.549579 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.549653 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:27Z","lastTransitionTime":"2026-01-27T09:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.652429 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.652513 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.652536 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.652570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.652594 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:27Z","lastTransitionTime":"2026-01-27T09:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.749692 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:54:27 crc kubenswrapper[4869]: E0127 09:54:27.749822 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:54:43.749798056 +0000 UTC m=+52.370222149 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.750241 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:27 crc kubenswrapper[4869]: E0127 09:54:27.750391 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 09:54:27 crc kubenswrapper[4869]: E0127 09:54:27.750463 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:43.750447965 +0000 UTC m=+52.370872058 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.750717 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.751308 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:27 crc kubenswrapper[4869]: E0127 09:54:27.751056 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 09:54:27 crc kubenswrapper[4869]: E0127 09:54:27.751439 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 09:54:27 crc kubenswrapper[4869]: E0127 09:54:27.751742 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 09:54:27 crc kubenswrapper[4869]: E0127 09:54:27.751762 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:27 crc kubenswrapper[4869]: E0127 09:54:27.751690 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:43.7516727 +0000 UTC m=+52.372096793 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 09:54:27 crc kubenswrapper[4869]: E0127 09:54:27.751879 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:43.751859756 +0000 UTC m=+52.372283849 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.756954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.756992 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.757007 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.757062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.757076 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:27Z","lastTransitionTime":"2026-01-27T09:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.851977 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:27 crc kubenswrapper[4869]: E0127 09:54:27.852202 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 09:54:27 crc kubenswrapper[4869]: E0127 09:54:27.852327 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 09:54:27 crc kubenswrapper[4869]: E0127 09:54:27.852419 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:27 crc kubenswrapper[4869]: E0127 09:54:27.852522 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 09:54:43.852507863 +0000 UTC m=+52.472931946 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.858791 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.858825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.858853 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.858867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.858876 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:27Z","lastTransitionTime":"2026-01-27T09:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.961574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.961637 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.961659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.961688 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.961712 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:27Z","lastTransitionTime":"2026-01-27T09:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 09:54:27 crc kubenswrapper[4869]: I0127 09:54:27.996035 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 17:55:55.837776824 +0000 UTC
Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.032525 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.032646 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
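The util.go entries above, and the pod_workers errors that follow, all trace back to the NotReady condition the kubelet keeps recording: no CNI config exists in /etc/kubernetes/cni/net.d/ because ovnkube-controller (crash-looping earlier in this log on the same expired-webhook error) is what would write it, and without a CNI config no new pod sandbox can be created. A simplified check that mirrors what this log complains about (the path is taken from the log message; the readiness rule is an approximation I'm assuming, since the real kubelet also validates file contents):

    import os

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # path from the log message

    def network_ready(conf_dir=CNI_CONF_DIR):
        # Approximation: report ready if any CNI config file exists at all,
        # which is exactly the condition this log says is unmet.
        try:
            return any(name.endswith((".conf", ".conflist", ".json"))
                       for name in os.listdir(conf_dir))
        except FileNotFoundError:
            return False

    print("NetworkReady:", network_ready())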
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:28 crc kubenswrapper[4869]: E0127 09:54:28.032824 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.032969 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:28 crc kubenswrapper[4869]: E0127 09:54:28.033011 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.033077 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:28 crc kubenswrapper[4869]: E0127 09:54:28.033155 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:28 crc kubenswrapper[4869]: E0127 09:54:28.033222 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.064878 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.064924 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.064940 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.064959 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.064973 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:28Z","lastTransitionTime":"2026-01-27T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.167978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.168090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.168104 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.168120 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.168148 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:28Z","lastTransitionTime":"2026-01-27T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.270997 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.271049 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.271065 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.271090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.271107 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:28Z","lastTransitionTime":"2026-01-27T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.364104 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.364167 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.364191 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.364223 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.364248 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:28Z","lastTransitionTime":"2026-01-27T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:28 crc kubenswrapper[4869]: E0127 09:54:28.379779 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.383976 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
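Every status-patch attempt in this boot fails identically: the kubelet's PATCH of node "crc" is rejected because the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-27. A minimal sketch of how one could confirm the expiry from the node itself, assuming Python 3 with the third-party cryptography package installed; the host, port, and timestamps come from the log above, everything else is hypothetical:

```python
# Hypothetical diagnostic sketch: fetch the webhook's serving certificate
# and compare its notAfter against the current time, mirroring the
# "x509: certificate has expired" check the kubelet reports above.
import datetime
import ssl

from cryptography import x509  # third-party package; assumed installed

HOST, PORT = "127.0.0.1", 9743  # node.network-node-identity.openshift.io webhook

# get_server_certificate() does not verify the peer, so it can still fetch
# a certificate that a normal (verified) TLS handshake would reject.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

now = datetime.datetime.now(datetime.timezone.utc)
not_after = cert.not_valid_after_utc  # cryptography >= 42; log shows 2025-08-24T17:21:41Z
if now > not_after:
    print(f"expired: current time {now:%Y-%m-%dT%H:%M:%SZ} is after {not_after:%Y-%m-%dT%H:%M:%SZ}")
else:
    print(f"valid until {not_after:%Y-%m-%dT%H:%M:%SZ}")
```

The sketch only identifies which endpoint is presenting the stale certificate; on CRC this class of failure is typically cleared by letting the cluster renew its internal certificates or by recreating the instance.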
event="NodeHasNoDiskPressure" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.384059 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.384090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.384112 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:28Z","lastTransitionTime":"2026-01-27T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:28 crc kubenswrapper[4869]: E0127 09:54:28.405751 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.410403 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.410434 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.410469 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.410506 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.410517 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:28Z","lastTransitionTime":"2026-01-27T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:28 crc kubenswrapper[4869]: E0127 09:54:28.430559 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.434614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.434690 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.434713 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.434743 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.434763 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:28Z","lastTransitionTime":"2026-01-27T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:28 crc kubenswrapper[4869]: E0127 09:54:28.461939 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.465975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.466004 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.466013 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.466027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.466036 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:28Z","lastTransitionTime":"2026-01-27T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:28 crc kubenswrapper[4869]: E0127 09:54:28.484057 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:28Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:28 crc kubenswrapper[4869]: E0127 09:54:28.484219 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.485731 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.485785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.485794 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.485809 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.485818 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:28Z","lastTransitionTime":"2026-01-27T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.588796 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.588884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.588900 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.588915 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.588924 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:28Z","lastTransitionTime":"2026-01-27T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.691371 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.691423 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.691438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.691506 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.691523 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:28Z","lastTransitionTime":"2026-01-27T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.793945 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.793975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.793983 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.793995 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.794003 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:28Z","lastTransitionTime":"2026-01-27T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.896596 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.896639 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.896650 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.896667 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.896678 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:28Z","lastTransitionTime":"2026-01-27T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.996623 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:26:09.902890214 +0000 UTC Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.999517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.999585 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.999603 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.999629 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:28 crc kubenswrapper[4869]: I0127 09:54:28.999648 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:28Z","lastTransitionTime":"2026-01-27T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.102744 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.102797 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.102809 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.102827 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.102868 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:29Z","lastTransitionTime":"2026-01-27T09:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.205799 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.205924 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.205953 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.205997 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.206021 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:29Z","lastTransitionTime":"2026-01-27T09:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.308942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.308990 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.309008 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.309036 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.309054 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:29Z","lastTransitionTime":"2026-01-27T09:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.411932 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.411976 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.411987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.412004 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.412015 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:29Z","lastTransitionTime":"2026-01-27T09:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.516340 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.516384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.516397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.516415 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.516458 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:29Z","lastTransitionTime":"2026-01-27T09:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.571997 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs\") pod \"network-metrics-daemon-p5frm\" (UID: \"0bf72cba-f163-4dc2-b157-cfeb56d0177b\") " pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:29 crc kubenswrapper[4869]: E0127 09:54:29.572216 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 09:54:29 crc kubenswrapper[4869]: E0127 09:54:29.572401 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs podName:0bf72cba-f163-4dc2-b157-cfeb56d0177b nodeName:}" failed. No retries permitted until 2026-01-27 09:54:33.572369788 +0000 UTC m=+42.192793951 (durationBeforeRetry 4s). 
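The mount failure above says the kubelet has not (yet) registered the object "openshift-multus"/"metrics-daemon-secret" for this pod. A minimal check, assuming working oc credentials against this cluster, to confirm whether the secret exists at all or is merely not yet synced to this kubelet (names taken from the entries above):

    # does the secret the kubelet is waiting for exist in the namespace?
    oc -n openshift-multus get secret metrics-daemon-secret
    # cross-check which secret name the pod spec actually references
    oc -n openshift-multus get pod network-metrics-daemon-p5frm -o jsonpath='{.spec.volumes[*].secret.secretName}'

If the secret exists, the "not registered" error usually clears on its own once the kubelet's informers catch up (note the 4s retry backoff in the log).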
Jan 27 09:54:29 crc kubenswrapper[4869]: I0127 09:54:29.996773 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 00:23:42.325063955 +0000 UTC
Jan 27 09:54:30 crc kubenswrapper[4869]: I0127 09:54:30.032481 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 09:54:30 crc kubenswrapper[4869]: I0127 09:54:30.032484 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm"
Jan 27 09:54:30 crc kubenswrapper[4869]: I0127 09:54:30.032533 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 09:54:30 crc kubenswrapper[4869]: I0127 09:54:30.032670 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 09:54:30 crc kubenswrapper[4869]: E0127 09:54:30.032661 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 09:54:30 crc kubenswrapper[4869]: E0127 09:54:30.032734 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b"
Jan 27 09:54:30 crc kubenswrapper[4869]: E0127 09:54:30.032776 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
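Every KubeletNotReady condition and every "Error syncing pod" above traces to the same root message: no CNI configuration file in /etc/kubernetes/cni/net.d/. A minimal sketch, assuming shell access to the CRC node, to see whether the network plugin has written its config yet and whether the SDN pods are up (the openshift-ovn-kubernetes namespace is an assumption; it applies to OVN-based releases):

    # is there any CNI config present where the kubelet is looking?
    ls -l /etc/kubernetes/cni/net.d/
    # are the SDN daemon pods running and scheduled on this node?
    oc -n openshift-ovn-kubernetes get pods -o wide

An empty net.d directory here would be consistent with the network operator's pods not having started, which in turn matches the webhook certificate failure seen above.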
Jan 27 09:54:30 crc kubenswrapper[4869]: E0127 09:54:30.032811 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 09:54:30 crc kubenswrapper[4869]: I0127 09:54:30.997587 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 06:14:06.232391467 +0000 UTC
Jan 27 09:54:31 crc kubenswrapper[4869]: I0127 09:54:31.997865 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 06:28:10.246407586 +0000 UTC
Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.032261 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm"
Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.032414 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.032459 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.032434 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 09:54:32 crc kubenswrapper[4869]: E0127 09:54:32.032479 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b"
Jan 27 09:54:32 crc kubenswrapper[4869]: E0127 09:54:32.032770 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 09:54:32 crc kubenswrapper[4869]: E0127 09:54:32.032851 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 09:54:32 crc kubenswrapper[4869]: E0127 09:54:32.033017 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
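Both the node-status patch failure earlier and the pod-status patch failures below fail for the same reason: the network-node-identity webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, while the node clock reads 2026-01-27. A minimal sketch to confirm the expiry from the node itself, assuming the endpoint from the log is reachable locally:

    # fetch the webhook's serving certificate and print its validity window
    echo | openssl s_client -connect 127.0.0.1:9743 2>/dev/null | openssl x509 -noout -dates

If notAfter predates the current time, the kubelet's status patches will keep being rejected until the certificate is rotated (on CRC this typically means letting the cluster's cert recovery run or recreating the instance).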
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.033285 4869 scope.go:117] "RemoveContainer" containerID="3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.043991 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.064624 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.081244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.081287 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.081298 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.081313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.081325 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:32Z","lastTransitionTime":"2026-01-27T09:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.082328 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.096094 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.105189 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.118780 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.129284 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df853189-32d1-44e5-8016-631a6f2880f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c2826308ac00d904e3f5e85796421150b10d87d5705c44b9a974986ee5537c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0977701a311923ecf54012a82d2e5ca4804846c56019a08b28d7dd556af7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xqf8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 
09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.140787 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.153285 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.164634 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.175915 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.184351 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.184391 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.184402 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.184418 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.184429 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:32Z","lastTransitionTime":"2026-01-27T09:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.186242 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.199959 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.216912 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.229277 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.247917 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a3bb16e18362a478c9aff77d70255ea9fd957b
209a36b6e61a40d8a29527d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e21225081a7c5d8a7ee6db711f82670c29d5688a5445f6b7c47170804b37fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:21Z\\\",\\\"message\\\":\\\"0127 09:54:21.742700 6170 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.742765 6170 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.742901 6170 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743063 6170 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743142 6170 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743189 6170 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743452 6170 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 09:54:21.743466 6170 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 09:54:21.743478 6170 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 09:54:21.743545 6170 factory.go:656] Stopping watch factory\\\\nI0127 09:54:21.743559 6170 ovnkube.go:599] Stopped ovnkube\\\\nI0127 09:54:21.743580 6170 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 09:54:21.743589 6170 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22a3bb16e18362a478c9aff77d70255ea9fd957b209a36b6e61a40d8a29527d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:23Z\\\",\\\"message\\\":\\\", UUID:\\\\\\\"97419c58-41c7-41d7-a137-a446f0c7eeb3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.138\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 09:54:23.207083 6287 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 09:54:23.207145 6287 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.259206 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p5frm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf72cba-f163-4dc2-b157-cfeb56d0177b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p5frm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.287437 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.287470 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.287478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.287491 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.287500 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:32Z","lastTransitionTime":"2026-01-27T09:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.356792 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.358193 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4"} Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.358950 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.370207 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664
f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.381932 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.389367 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.389401 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.389409 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.389422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.389431 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:32Z","lastTransitionTime":"2026-01-27T09:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.392439 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.402850 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.415884 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.425066 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.436263 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.447917 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.458901 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.468157 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.480158 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.490005 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df853189-32d1-44e5-8016-631a6f2880f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c2826308ac00d904e3f5e85796421150b10d87d5705c44b9a974986ee5537c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0977701a311923ecf54012a82d2e5ca4804846c56019a08b28d7dd556af7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:25Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xqf8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.491312 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.491340 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.491351 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.491366 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.491374 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:32Z","lastTransitionTime":"2026-01-27T09:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.505529 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.522032 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.535743 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.551726 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a3bb16e18362a478c9aff77d70255ea9fd957b
209a36b6e61a40d8a29527d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22e21225081a7c5d8a7ee6db711f82670c29d5688a5445f6b7c47170804b37fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:21Z\\\",\\\"message\\\":\\\"0127 09:54:21.742700 6170 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.742765 6170 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.742901 6170 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743063 6170 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743142 6170 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743189 6170 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:54:21.743452 6170 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 09:54:21.743466 6170 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 09:54:21.743478 6170 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 09:54:21.743545 6170 factory.go:656] Stopping watch factory\\\\nI0127 09:54:21.743559 6170 ovnkube.go:599] Stopped ovnkube\\\\nI0127 09:54:21.743580 6170 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 09:54:21.743589 6170 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22a3bb16e18362a478c9aff77d70255ea9fd957b209a36b6e61a40d8a29527d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:23Z\\\",\\\"message\\\":\\\", UUID:\\\\\\\"97419c58-41c7-41d7-a137-a446f0c7eeb3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.138\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, 
Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 09:54:23.207083 6287 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 09:54:23.207145 6287 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.560744 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p5frm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf72cba-f163-4dc2-b157-cfeb56d0177b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p5frm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:32Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.593899 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.593946 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.593956 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.593972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.593983 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:32Z","lastTransitionTime":"2026-01-27T09:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.696212 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.696243 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.696250 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.696264 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.696271 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:32Z","lastTransitionTime":"2026-01-27T09:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.798444 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.798482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.798494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.798509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.798520 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:32Z","lastTransitionTime":"2026-01-27T09:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.901277 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.901321 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.901330 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.901345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.901355 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:32Z","lastTransitionTime":"2026-01-27T09:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:32 crc kubenswrapper[4869]: I0127 09:54:32.998296 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 16:27:18.102139104 +0000 UTC Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.003650 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.003684 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.003692 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.003705 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.003714 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:33Z","lastTransitionTime":"2026-01-27T09:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.106413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.106465 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.106475 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.106496 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.106507 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:33Z","lastTransitionTime":"2026-01-27T09:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.208208 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.208241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.208250 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.208265 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.208274 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:33Z","lastTransitionTime":"2026-01-27T09:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.310269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.310305 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.310316 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.310334 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.310344 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:33Z","lastTransitionTime":"2026-01-27T09:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.412817 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.412869 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.412880 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.412896 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.412908 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:33Z","lastTransitionTime":"2026-01-27T09:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.515253 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.515459 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.515579 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.515679 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.515774 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:33Z","lastTransitionTime":"2026-01-27T09:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.611263 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs\") pod \"network-metrics-daemon-p5frm\" (UID: \"0bf72cba-f163-4dc2-b157-cfeb56d0177b\") " pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:33 crc kubenswrapper[4869]: E0127 09:54:33.611391 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 09:54:33 crc kubenswrapper[4869]: E0127 09:54:33.611741 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs podName:0bf72cba-f163-4dc2-b157-cfeb56d0177b nodeName:}" failed. No retries permitted until 2026-01-27 09:54:41.611686526 +0000 UTC m=+50.232110609 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs") pod "network-metrics-daemon-p5frm" (UID: "0bf72cba-f163-4dc2-b157-cfeb56d0177b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.617975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.618014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.618023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.618037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.618051 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:33Z","lastTransitionTime":"2026-01-27T09:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.720007 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.720039 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.720047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.720060 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.720070 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:33Z","lastTransitionTime":"2026-01-27T09:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.821989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.822267 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.822363 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.822436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.822494 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:33Z","lastTransitionTime":"2026-01-27T09:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.925384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.925449 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.925460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.925478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.925491 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:33Z","lastTransitionTime":"2026-01-27T09:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:33 crc kubenswrapper[4869]: I0127 09:54:33.999126 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:45:07.242674661 +0000 UTC Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.028149 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.028184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.028193 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.028212 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.028222 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:34Z","lastTransitionTime":"2026-01-27T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.032445 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.032526 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.032470 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.032469 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:34 crc kubenswrapper[4869]: E0127 09:54:34.032593 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:54:34 crc kubenswrapper[4869]: E0127 09:54:34.032689 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:34 crc kubenswrapper[4869]: E0127 09:54:34.032752 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:34 crc kubenswrapper[4869]: E0127 09:54:34.032955 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.130468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.130503 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.130512 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.130529 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.130555 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:34Z","lastTransitionTime":"2026-01-27T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.232671 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.232706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.232717 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.232733 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.232744 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:34Z","lastTransitionTime":"2026-01-27T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.335233 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.335296 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.335311 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.335333 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.335347 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:34Z","lastTransitionTime":"2026-01-27T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.439066 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.439113 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.439122 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.439141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.439151 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:34Z","lastTransitionTime":"2026-01-27T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.542272 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.542328 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.542339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.542355 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.542366 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:34Z","lastTransitionTime":"2026-01-27T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.645393 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.645433 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.645442 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.645457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.645468 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:34Z","lastTransitionTime":"2026-01-27T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.748089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.748132 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.748140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.748154 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.748163 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:34Z","lastTransitionTime":"2026-01-27T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.850608 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.850639 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.850648 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.850662 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.850672 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:34Z","lastTransitionTime":"2026-01-27T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.952526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.952560 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.952571 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.952589 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.952599 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:34Z","lastTransitionTime":"2026-01-27T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:34 crc kubenswrapper[4869]: I0127 09:54:34.999570 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 20:03:21.552973735 +0000 UTC Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.054852 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.054884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.054894 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.054908 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.054916 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:35Z","lastTransitionTime":"2026-01-27T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.157263 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.157296 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.157305 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.157320 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.157329 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:35Z","lastTransitionTime":"2026-01-27T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.259469 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.259508 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.259519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.259534 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.259545 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:35Z","lastTransitionTime":"2026-01-27T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.361892 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.362502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.362541 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.362550 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.362564 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.362573 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:35Z","lastTransitionTime":"2026-01-27T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.362638 4869 scope.go:117] "RemoveContainer" containerID="22a3bb16e18362a478c9aff77d70255ea9fd957b209a36b6e61a40d8a29527d2" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.373173 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:35Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.387123 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:35Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.399401 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:35Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.409772 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:35Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.420424 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:35Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.433438 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:35Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.443593 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df853189-32d1-44e5-8016-631a6f2880f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c2826308ac00d904e3f5e85796421150b10d87d5705c44b9a974986ee5537c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0977701a311923ecf54012a82d2e5ca4804846c56019a08b28d7dd556af7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xqf8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:35Z is after 2025-08-24T17:21:41Z" Jan 27 
09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.454404 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:35Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.464644 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.464676 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.464685 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.464697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.464706 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:35Z","lastTransitionTime":"2026-01-27T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.466001 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:35Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.480661 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:35Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.528479 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:35Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.544666 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea17
7225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:35Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.559486 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:35Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.566937 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.566968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.566981 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.566996 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.567007 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:35Z","lastTransitionTime":"2026-01-27T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.577146 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:35Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.589956 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:35Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.605784 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22
578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a3bb16e18362a478c9aff77d70255ea9fd957b209a36b6e61a40d8a29527d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22a3bb16e18362a478c9aff77d70255ea9fd957b209a36b6e61a40d8a29527d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:23Z\\\",\\\"message\\\":\\\", UUID:\\\\\\\"97419c58-41c7-41d7-a137-a446f0c7eeb3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.138\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 09:54:23.207083 6287 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 09:54:23.207145 6287 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-45hzs_openshift-ovn-kubernetes(8d38c693-da40-464a-9822-f98fb1b5ca35)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:35Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.615012 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p5frm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf72cba-f163-4dc2-b157-cfeb56d0177b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p5frm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:35Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.669016 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.669073 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.669090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.669113 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.669130 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:35Z","lastTransitionTime":"2026-01-27T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.771671 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.771710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.771721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.771736 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.771748 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:35Z","lastTransitionTime":"2026-01-27T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.876117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.876539 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.876560 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.876578 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.876591 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:35Z","lastTransitionTime":"2026-01-27T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.978948 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.978986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.978995 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.979010 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:35 crc kubenswrapper[4869]: I0127 09:54:35.979019 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:35Z","lastTransitionTime":"2026-01-27T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.000410 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 00:42:10.066342123 +0000 UTC Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.032772 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.032796 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.032770 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.032771 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:36 crc kubenswrapper[4869]: E0127 09:54:36.032911 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:36 crc kubenswrapper[4869]: E0127 09:54:36.033074 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:36 crc kubenswrapper[4869]: E0127 09:54:36.033163 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:36 crc kubenswrapper[4869]: E0127 09:54:36.033252 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.081143 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.081186 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.081196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.081212 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.081221 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:36Z","lastTransitionTime":"2026-01-27T09:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.183526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.183566 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.183575 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.183589 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.183600 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:36Z","lastTransitionTime":"2026-01-27T09:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.285671 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.285707 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.285716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.285730 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.285739 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:36Z","lastTransitionTime":"2026-01-27T09:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.373022 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-45hzs_8d38c693-da40-464a-9822-f98fb1b5ca35/ovnkube-controller/2.log" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.373622 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-45hzs_8d38c693-da40-464a-9822-f98fb1b5ca35/ovnkube-controller/1.log" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.377219 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerID="e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89" exitCode=1 Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.377257 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerDied","Data":"e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89"} Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.377291 4869 scope.go:117] "RemoveContainer" containerID="22a3bb16e18362a478c9aff77d70255ea9fd957b209a36b6e61a40d8a29527d2" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.378066 4869 scope.go:117] "RemoveContainer" containerID="e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89" Jan 27 09:54:36 crc kubenswrapper[4869]: E0127 09:54:36.378254 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-45hzs_openshift-ovn-kubernetes(8d38c693-da40-464a-9822-f98fb1b5ca35)\"" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.389442 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.389481 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.389492 4869 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.389508 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.389520 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:36Z","lastTransitionTime":"2026-01-27T09:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.392517 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.403878 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p5frm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf72cba-f163-4dc2-b157-cfeb56d0177b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p5frm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.424160 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.435926 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.453138 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9eb78ed1123343117c4139a39a25e40772f20ca
af1500755bf082c4b60ecd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22a3bb16e18362a478c9aff77d70255ea9fd957b209a36b6e61a40d8a29527d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:23Z\\\",\\\"message\\\":\\\", UUID:\\\\\\\"97419c58-41c7-41d7-a137-a446f0c7eeb3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.138\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 09:54:23.207083 6287 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 09:54:23.207145 6287 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:36Z\\\",\\\"message\\\":\\\"p_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 09:54:36.147597 6525 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0127 09:54:36.147615 6525 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 2.274197ms\\\\nI0127 09:54:36.147635 6525 services_controller.go:356] Processing sync for service openshift-etcd/etcd for network=default\\\\nI0127 09:54:36.147646 6525 
metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 09:54:36.147453 6525 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0127 09:54:36.147731 6525 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/e
nv\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.467536 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.477700 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.490212 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.492145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.492189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.492204 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.492224 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.492237 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:36Z","lastTransitionTime":"2026-01-27T09:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.503001 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.513268 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.522688 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.535164 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.547901 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df853189-32d1-44e5-8016-631a6f2880f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c2826308ac00d904e3f5e85796421150b10d87d5705c44b9a974986ee5537c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0977701a311923ecf54012a82d2e5ca4804846c56019a08b28d7dd556af7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xqf8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.561522 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.573381 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.586662 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.598058 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.598088 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.598096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.598111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.598120 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:36Z","lastTransitionTime":"2026-01-27T09:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.601898 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:36Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.699856 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 
09:54:36.699901 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.699911 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.699925 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.699935 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:36Z","lastTransitionTime":"2026-01-27T09:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.802557 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.802613 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.802633 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.802654 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.802670 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:36Z","lastTransitionTime":"2026-01-27T09:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.904370 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.904447 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.904479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.904511 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:36 crc kubenswrapper[4869]: I0127 09:54:36.904531 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:36Z","lastTransitionTime":"2026-01-27T09:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.001195 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 18:06:01.954005202 +0000 UTC Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.007429 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.007479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.007501 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.007530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.007551 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:37Z","lastTransitionTime":"2026-01-27T09:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.110705 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.110740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.110748 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.110761 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.110787 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:37Z","lastTransitionTime":"2026-01-27T09:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.213283 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.213343 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.213360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.213385 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.213404 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:37Z","lastTransitionTime":"2026-01-27T09:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.315415 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.315457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.315468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.315483 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.315494 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:37Z","lastTransitionTime":"2026-01-27T09:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.382228 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-45hzs_8d38c693-da40-464a-9822-f98fb1b5ca35/ovnkube-controller/2.log" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.418217 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.418293 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.418317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.418342 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.418360 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:37Z","lastTransitionTime":"2026-01-27T09:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.521121 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.521384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.521504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.521607 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.521705 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:37Z","lastTransitionTime":"2026-01-27T09:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.624008 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.624297 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.624362 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.624433 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.624493 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:37Z","lastTransitionTime":"2026-01-27T09:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.726985 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.727235 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.727388 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.727459 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.727558 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:37Z","lastTransitionTime":"2026-01-27T09:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.830478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.830533 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.830545 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.830563 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.830573 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:37Z","lastTransitionTime":"2026-01-27T09:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.935472 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.935537 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.935554 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.935579 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:37 crc kubenswrapper[4869]: I0127 09:54:37.935595 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:37Z","lastTransitionTime":"2026-01-27T09:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.001344 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 08:23:19.636740013 +0000 UTC Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.033481 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.033524 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.033532 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:38 crc kubenswrapper[4869]: E0127 09:54:38.034230 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.033521 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:38 crc kubenswrapper[4869]: E0127 09:54:38.034359 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:38 crc kubenswrapper[4869]: E0127 09:54:38.034481 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:54:38 crc kubenswrapper[4869]: E0127 09:54:38.034645 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.038796 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.038896 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.038920 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.038954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.038975 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:38Z","lastTransitionTime":"2026-01-27T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.142986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.143051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.143134 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.143202 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.143225 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:38Z","lastTransitionTime":"2026-01-27T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.246333 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.246375 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.246388 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.246405 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.246419 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:38Z","lastTransitionTime":"2026-01-27T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.348799 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.348858 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.348870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.348887 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.348898 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:38Z","lastTransitionTime":"2026-01-27T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.451427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.451502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.451525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.451555 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.451611 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:38Z","lastTransitionTime":"2026-01-27T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.554228 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.554265 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.554274 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.554291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.554299 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:38Z","lastTransitionTime":"2026-01-27T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.656719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.656761 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.656774 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.656789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.656803 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:38Z","lastTransitionTime":"2026-01-27T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.705149 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.705191 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.705199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.705211 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.705221 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:38Z","lastTransitionTime":"2026-01-27T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:38 crc kubenswrapper[4869]: E0127 09:54:38.717487 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:38Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.721336 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.721372 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.721383 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.721398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.721411 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:38Z","lastTransitionTime":"2026-01-27T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:38 crc kubenswrapper[4869]: E0127 09:54:38.736612 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:38Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.741742 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.741776 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.741785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.741799 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.741808 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:38Z","lastTransitionTime":"2026-01-27T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:38 crc kubenswrapper[4869]: E0127 09:54:38.752747 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:38Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.756505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.756557 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.756577 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.756601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.756619 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:38Z","lastTransitionTime":"2026-01-27T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:38 crc kubenswrapper[4869]: E0127 09:54:38.772405 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:38Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.776822 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.776874 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.776884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.776897 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.776906 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:38Z","lastTransitionTime":"2026-01-27T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:38 crc kubenswrapper[4869]: E0127 09:54:38.788719 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:38Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:38 crc kubenswrapper[4869]: E0127 09:54:38.788841 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.790824 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.790858 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.790867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.790880 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.790888 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:38Z","lastTransitionTime":"2026-01-27T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.893317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.893377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.893396 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.893418 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.893433 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:38Z","lastTransitionTime":"2026-01-27T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.996279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.996529 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.996659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.996758 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:38 crc kubenswrapper[4869]: I0127 09:54:38.996866 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:38Z","lastTransitionTime":"2026-01-27T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.001855 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 23:02:20.319026949 +0000 UTC Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.099587 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.099681 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.099699 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.099722 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.099738 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:39Z","lastTransitionTime":"2026-01-27T09:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.203440 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.203497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.203506 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.203521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.203530 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:39Z","lastTransitionTime":"2026-01-27T09:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.306898 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.306975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.306998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.307024 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.307042 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:39Z","lastTransitionTime":"2026-01-27T09:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.410484 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.410546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.410564 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.410589 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.410607 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:39Z","lastTransitionTime":"2026-01-27T09:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.514944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.515012 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.515037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.515079 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.515101 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:39Z","lastTransitionTime":"2026-01-27T09:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.618398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.618466 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.618491 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.618525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.618548 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:39Z","lastTransitionTime":"2026-01-27T09:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.721971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.722037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.722061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.722090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.722111 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:39Z","lastTransitionTime":"2026-01-27T09:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.825111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.825155 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.825170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.825189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.825202 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:39Z","lastTransitionTime":"2026-01-27T09:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.927720 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.927807 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.927880 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.927914 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:39 crc kubenswrapper[4869]: I0127 09:54:39.927936 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:39Z","lastTransitionTime":"2026-01-27T09:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.002450 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 08:33:34.001964532 +0000 UTC Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.031150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.031195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.031211 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.031233 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.031248 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:40Z","lastTransitionTime":"2026-01-27T09:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.032233 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.032298 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.032302 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:40 crc kubenswrapper[4869]: E0127 09:54:40.032374 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.032391 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:40 crc kubenswrapper[4869]: E0127 09:54:40.032568 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:54:40 crc kubenswrapper[4869]: E0127 09:54:40.032674 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:40 crc kubenswrapper[4869]: E0127 09:54:40.032776 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.133760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.133790 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.133799 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.133812 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.133821 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:40Z","lastTransitionTime":"2026-01-27T09:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.236504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.236548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.236561 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.236576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.236592 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:40Z","lastTransitionTime":"2026-01-27T09:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.340140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.340173 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.340182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.340195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.340205 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:40Z","lastTransitionTime":"2026-01-27T09:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.442575 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.442612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.442620 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.442639 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.442648 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:40Z","lastTransitionTime":"2026-01-27T09:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.545314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.545370 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.545386 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.545407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.545422 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:40Z","lastTransitionTime":"2026-01-27T09:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.648906 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.648983 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.649009 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.649033 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.649050 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:40Z","lastTransitionTime":"2026-01-27T09:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.752349 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.752390 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.752398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.752416 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.752426 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:40Z","lastTransitionTime":"2026-01-27T09:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.855407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.855454 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.855471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.855493 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.855510 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:40Z","lastTransitionTime":"2026-01-27T09:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.958532 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.958589 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.958606 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.958632 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:40 crc kubenswrapper[4869]: I0127 09:54:40.958697 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:40Z","lastTransitionTime":"2026-01-27T09:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.003318 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 21:22:16.195385962 +0000 UTC Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.061885 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.061962 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.061975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.061999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.062021 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:41Z","lastTransitionTime":"2026-01-27T09:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.165365 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.165413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.165429 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.165447 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.165463 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:41Z","lastTransitionTime":"2026-01-27T09:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.268390 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.268457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.268474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.268499 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.268517 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:41Z","lastTransitionTime":"2026-01-27T09:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.372068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.372136 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.372154 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.372179 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.372199 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:41Z","lastTransitionTime":"2026-01-27T09:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.474744 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.474816 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.474873 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.474907 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.474929 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:41Z","lastTransitionTime":"2026-01-27T09:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.578243 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.578314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.578343 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.578372 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.578390 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:41Z","lastTransitionTime":"2026-01-27T09:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.681973 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.682021 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.682035 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.682054 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.682071 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:41Z","lastTransitionTime":"2026-01-27T09:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.707091 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs\") pod \"network-metrics-daemon-p5frm\" (UID: \"0bf72cba-f163-4dc2-b157-cfeb56d0177b\") " pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:41 crc kubenswrapper[4869]: E0127 09:54:41.707246 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 09:54:41 crc kubenswrapper[4869]: E0127 09:54:41.707319 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs podName:0bf72cba-f163-4dc2-b157-cfeb56d0177b nodeName:}" failed. No retries permitted until 2026-01-27 09:54:57.707294229 +0000 UTC m=+66.327718322 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs") pod "network-metrics-daemon-p5frm" (UID: "0bf72cba-f163-4dc2-b157-cfeb56d0177b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.785317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.785358 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.785367 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.785382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.785392 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:41Z","lastTransitionTime":"2026-01-27T09:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.888640 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.888704 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.888717 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.888735 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.888749 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:41Z","lastTransitionTime":"2026-01-27T09:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.992051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.992111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.992131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.992158 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:41 crc kubenswrapper[4869]: I0127 09:54:41.992176 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:41Z","lastTransitionTime":"2026-01-27T09:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.003448 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:43:59.789167172 +0000 UTC Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.032193 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.032299 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:42 crc kubenswrapper[4869]: E0127 09:54:42.032428 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.032462 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.032241 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:42 crc kubenswrapper[4869]: E0127 09:54:42.032760 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:54:42 crc kubenswrapper[4869]: E0127 09:54:42.033094 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:42 crc kubenswrapper[4869]: E0127 09:54:42.033262 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.054398 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:42Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.069168 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:42Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.093889 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:42Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.094494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.094533 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:42 crc 
kubenswrapper[4869]: I0127 09:54:42.094580 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.094598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.094608 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:42Z","lastTransitionTime":"2026-01-27T09:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.109988 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:42Z is after 2025-08-24T17:21:41Z" Jan 
27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.133684 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs
\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:42Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.153157 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:42Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.169117 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:42Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.182671 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:42Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.195239 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:42Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.196779 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.196989 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.197017 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.197038 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.197053 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:42Z","lastTransitionTime":"2026-01-27T09:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.209092 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df853189-32d1-44e5-8016-631a6f2880f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c2826308ac00d904e3f5e85796421150b10d87d5705c44b9a974986ee5537c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0977701a311923ecf54012a82d2e5ca4804846c56019a08b28d7dd556af7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xqf8x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:42Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.225314 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:42Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.246974 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:42Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.265212 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:42Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.281635 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:42Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.299115 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.299168 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.299183 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.299204 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.299218 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:42Z","lastTransitionTime":"2026-01-27T09:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.305101 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9eb78ed1123343117c4139a39a25e40772f20ca
af1500755bf082c4b60ecd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22a3bb16e18362a478c9aff77d70255ea9fd957b209a36b6e61a40d8a29527d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:23Z\\\",\\\"message\\\":\\\", UUID:\\\\\\\"97419c58-41c7-41d7-a137-a446f0c7eeb3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.138\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 09:54:23.207083 6287 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 09:54:23.207145 6287 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:36Z\\\",\\\"message\\\":\\\"p_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 09:54:36.147597 6525 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0127 09:54:36.147615 6525 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 2.274197ms\\\\nI0127 09:54:36.147635 6525 services_controller.go:356] Processing sync for service openshift-etcd/etcd for network=default\\\\nI0127 09:54:36.147646 6525 
metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 09:54:36.147453 6525 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0127 09:54:36.147731 6525 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/e
nv\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:42Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.317132 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p5frm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf72cba-f163-4dc2-b157-cfeb56d0177b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p5frm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:42Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.338338 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:42Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.401472 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.401502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.401510 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.401523 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.401532 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:42Z","lastTransitionTime":"2026-01-27T09:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.504493 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.504536 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.504548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.504566 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.504578 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:42Z","lastTransitionTime":"2026-01-27T09:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.606924 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.606959 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.606967 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.606981 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.606990 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:42Z","lastTransitionTime":"2026-01-27T09:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.709707 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.709791 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.709800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.714220 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.714237 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:42Z","lastTransitionTime":"2026-01-27T09:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.817349 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.817408 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.817422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.817440 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.817449 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:42Z","lastTransitionTime":"2026-01-27T09:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.920255 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.920277 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.920285 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.920297 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:42 crc kubenswrapper[4869]: I0127 09:54:42.920305 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:42Z","lastTransitionTime":"2026-01-27T09:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.004211 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 04:48:34.064391917 +0000 UTC Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.022100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.022160 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.022170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.022185 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.022197 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:43Z","lastTransitionTime":"2026-01-27T09:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.125167 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.125217 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.125234 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.125258 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.125276 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:43Z","lastTransitionTime":"2026-01-27T09:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.227531 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.227572 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.227584 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.227601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.227613 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:43Z","lastTransitionTime":"2026-01-27T09:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.329861 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.329912 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.329940 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.329963 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.329979 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:43Z","lastTransitionTime":"2026-01-27T09:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.432677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.432737 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.432754 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.432779 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.432796 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:43Z","lastTransitionTime":"2026-01-27T09:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.534623 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.534653 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.534661 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.534673 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.534682 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:43Z","lastTransitionTime":"2026-01-27T09:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.637334 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.637378 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.637389 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.637405 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.637415 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:43Z","lastTransitionTime":"2026-01-27T09:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.739697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.739762 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.739771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.739784 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.739793 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:43Z","lastTransitionTime":"2026-01-27T09:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.828693 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.828808 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:43 crc kubenswrapper[4869]: E0127 09:54:43.828886 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:55:15.828851307 +0000 UTC m=+84.449275400 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:54:43 crc kubenswrapper[4869]: E0127 09:54:43.828925 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.828994 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:43 crc kubenswrapper[4869]: E0127 09:54:43.829002 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 09:55:15.828985871 +0000 UTC m=+84.449409964 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.829055 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:43 crc kubenswrapper[4869]: E0127 09:54:43.829151 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 09:54:43 crc kubenswrapper[4869]: E0127 09:54:43.829184 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 09:54:43 crc kubenswrapper[4869]: E0127 09:54:43.829203 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 09:54:43 crc kubenswrapper[4869]: E0127 09:54:43.829207 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 09:55:15.829197827 +0000 UTC m=+84.449621910 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 09:54:43 crc kubenswrapper[4869]: E0127 09:54:43.829216 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:43 crc kubenswrapper[4869]: E0127 09:54:43.829250 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 09:55:15.829240288 +0000 UTC m=+84.449664371 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.841984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.842030 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.842040 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.842057 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.842069 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:43Z","lastTransitionTime":"2026-01-27T09:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.930374 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:43 crc kubenswrapper[4869]: E0127 09:54:43.930530 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 09:54:43 crc kubenswrapper[4869]: E0127 09:54:43.930560 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 09:54:43 crc kubenswrapper[4869]: E0127 09:54:43.930575 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:43 crc kubenswrapper[4869]: E0127 09:54:43.930642 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 09:55:15.930623267 +0000 UTC m=+84.551047370 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.945176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.945229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.945240 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.945256 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:43 crc kubenswrapper[4869]: I0127 09:54:43.945269 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:43Z","lastTransitionTime":"2026-01-27T09:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.005173 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 20:20:56.29741291 +0000 UTC Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.032226 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.032285 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:44 crc kubenswrapper[4869]: E0127 09:54:44.032359 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.032247 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:44 crc kubenswrapper[4869]: E0127 09:54:44.032459 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.032428 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:44 crc kubenswrapper[4869]: E0127 09:54:44.032662 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:44 crc kubenswrapper[4869]: E0127 09:54:44.032809 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.047401 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.047439 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.047448 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.047462 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.047472 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:44Z","lastTransitionTime":"2026-01-27T09:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.150220 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.150284 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.150304 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.150330 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.150348 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:44Z","lastTransitionTime":"2026-01-27T09:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
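
The durationBeforeRetry 32s in the mount and unmount failures above is the kubelet's per-operation exponential backoff. A sketch of that doubling schedule; the 0.5 s initial delay, factor 2, and 2m2s cap are assumed kubelet defaults, not values taken from this log.

    # Exponential backoff as used for the failed volume operations above.
    INITIAL, FACTOR, CAP = 0.5, 2.0, 122.0  # seconds; assumed constants

    def backoff_schedule(failures):
        delay = INITIAL
        for n in range(failures):
            yield n + 1, min(delay, CAP)
            delay *= FACTOR

    for attempt, delay in backoff_schedule(9):
        print(f"failure {attempt}: next retry in {delay:g}s")
    # failure 7 lands on 32s, matching the durationBeforeRetry seen above.
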
Has your network provider started?"} Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.253931 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.253973 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.253984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.253999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.254011 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:44Z","lastTransitionTime":"2026-01-27T09:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.356690 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.356739 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.356750 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.356765 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.356778 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:44Z","lastTransitionTime":"2026-01-27T09:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.459742 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.459782 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.459790 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.459805 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.459814 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:44Z","lastTransitionTime":"2026-01-27T09:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.561799 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.561867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.561882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.561896 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.561906 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:44Z","lastTransitionTime":"2026-01-27T09:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.664566 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.664604 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.664615 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.664647 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.664659 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:44Z","lastTransitionTime":"2026-01-27T09:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.766966 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.766997 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.767005 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.767017 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.767025 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:44Z","lastTransitionTime":"2026-01-27T09:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.869823 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.869914 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.869931 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.869955 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.869972 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:44Z","lastTransitionTime":"2026-01-27T09:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.972053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.972085 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.972097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.972115 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:44 crc kubenswrapper[4869]: I0127 09:54:44.972126 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:44Z","lastTransitionTime":"2026-01-27T09:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.005722 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 23:40:56.904652158 +0000 UTC Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.074803 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.074857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.074870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.074885 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.074896 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:45Z","lastTransitionTime":"2026-01-27T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.177658 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.177691 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.177703 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.177715 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.177726 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:45Z","lastTransitionTime":"2026-01-27T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.280426 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.280470 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.280479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.280493 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.280502 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:45Z","lastTransitionTime":"2026-01-27T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.382692 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.382749 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.382759 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.382775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.382807 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:45Z","lastTransitionTime":"2026-01-27T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.485970 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.486030 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.486052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.486074 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.486092 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:45Z","lastTransitionTime":"2026-01-27T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.588927 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.588965 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.588977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.588993 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.589008 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:45Z","lastTransitionTime":"2026-01-27T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.692818 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.693015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.693031 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.693049 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.693061 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:45Z","lastTransitionTime":"2026-01-27T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.796150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.796192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.796204 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.796223 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.796235 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:45Z","lastTransitionTime":"2026-01-27T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.818129 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.827375 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.831717 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:45Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.845714 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:45Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.855334 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:45Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.865225 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df853189-32d1-44e5-8016-631a6f2880f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c2826308ac00d904e3f5e85796421150b10d87d5705c44b9a974986ee5537c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0977701a311923ecf54012a82d2e5ca4804846c56019a08b28d7dd556af7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:25Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xqf8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:45Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.878100 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:45Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.895996 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:45Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.899923 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.899974 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.899987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.900003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.900016 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:45Z","lastTransitionTime":"2026-01-27T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
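
Every status patch above fails on the same expired webhook serving certificate. A small sketch (not part of the journal) that extracts the two timestamps from that x509 error and reports how stale the certificate is; the sample string is copied verbatim from the entries above.

    from datetime import datetime
    import re

    # Error message as emitted repeatedly by status_manager.go above.
    ERR = ("tls: failed to verify certificate: x509: certificate has expired "
           "or is not yet valid: current time 2026-01-27T09:54:45Z is after "
           "2025-08-24T17:21:41Z")

    m = re.search(r"current time (\S+) is after (\S+)", ERR)
    now, not_after = (datetime.strptime(t, "%Y-%m-%dT%H:%M:%S%z")
                      for t in m.groups())
    print(f"webhook cert expired {now - not_after} ago")
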
Has your network provider started?"} Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.912740 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:45Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.926263 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:45Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.947723 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22
578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22a3bb16e18362a478c9aff77d70255ea9fd957b209a36b6e61a40d8a29527d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:23Z\\\",\\\"message\\\":\\\", UUID:\\\\\\\"97419c58-41c7-41d7-a137-a446f0c7eeb3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.138\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 09:54:23.207083 6287 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 09:54:23.207145 6287 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:36Z\\\",\\\"message\\\":\\\"p_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 09:54:36.147597 6525 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0127 09:54:36.147615 6525 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 2.274197ms\\\\nI0127 09:54:36.147635 6525 services_controller.go:356] Processing sync for service openshift-etcd/etcd for network=default\\\\nI0127 09:54:36.147646 6525 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 09:54:36.147453 6525 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0127 09:54:36.147731 6525 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler 
{0x1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:45Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.959211 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p5frm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf72cba-f163-4dc2-b157-cfeb56d0177b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p5frm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:45Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.977098 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:45Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.989591 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:45Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:45 crc kubenswrapper[4869]: I0127 09:54:45.999148 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:45Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.002492 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.002518 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.002528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.002542 4869 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.002552 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:46Z","lastTransitionTime":"2026-01-27T09:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.006292 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 13:20:49.423070081 +0000 UTC Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.012597 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04a
dd3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.021214 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.030766 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.032202 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.032238 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.032225 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.032202 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:46 crc kubenswrapper[4869]: E0127 09:54:46.032322 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:46 crc kubenswrapper[4869]: E0127 09:54:46.032417 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:54:46 crc kubenswrapper[4869]: E0127 09:54:46.032461 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:46 crc kubenswrapper[4869]: E0127 09:54:46.032576 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.046287 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:46Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.104269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.104313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.104326 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.104342 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.104353 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:46Z","lastTransitionTime":"2026-01-27T09:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.206434 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.206476 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.206504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.206521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.206530 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:46Z","lastTransitionTime":"2026-01-27T09:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.308524 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.308570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.308585 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.308608 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.308624 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:46Z","lastTransitionTime":"2026-01-27T09:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.411669 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.411721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.411732 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.411748 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.411758 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:46Z","lastTransitionTime":"2026-01-27T09:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.514456 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.514493 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.514502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.514517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.514528 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:46Z","lastTransitionTime":"2026-01-27T09:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.616198 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.616443 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.616514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.616601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.616682 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:46Z","lastTransitionTime":"2026-01-27T09:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.720052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.720440 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.720597 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.720754 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.720941 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:46Z","lastTransitionTime":"2026-01-27T09:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.824132 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.824174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.824184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.824199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.824209 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:46Z","lastTransitionTime":"2026-01-27T09:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.927909 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.927955 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.927973 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.927995 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:46 crc kubenswrapper[4869]: I0127 09:54:46.928009 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:46Z","lastTransitionTime":"2026-01-27T09:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.006895 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 01:30:28.104563903 +0000 UTC Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.030710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.030740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.030749 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.030765 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.030776 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:47Z","lastTransitionTime":"2026-01-27T09:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.133745 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.134019 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.134089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.134167 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.134228 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:47Z","lastTransitionTime":"2026-01-27T09:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.236759 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.237033 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.237105 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.237184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.237260 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:47Z","lastTransitionTime":"2026-01-27T09:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.339467 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.339953 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.340129 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.340319 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.340519 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:47Z","lastTransitionTime":"2026-01-27T09:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.443611 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.443672 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.443682 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.443698 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.443709 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:47Z","lastTransitionTime":"2026-01-27T09:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.546709 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.546754 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.546769 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.546787 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.546801 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:47Z","lastTransitionTime":"2026-01-27T09:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.649528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.649563 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.649574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.649591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.649602 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:47Z","lastTransitionTime":"2026-01-27T09:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.751784 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.751892 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.751903 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.751917 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.751929 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:47Z","lastTransitionTime":"2026-01-27T09:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.854619 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.854661 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.854671 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.854687 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.854698 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:47Z","lastTransitionTime":"2026-01-27T09:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.957529 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.957564 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.957574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.957589 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:47 crc kubenswrapper[4869]: I0127 09:54:47.957598 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:47Z","lastTransitionTime":"2026-01-27T09:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.008040 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 01:41:29.406334943 +0000 UTC Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.032681 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.032794 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:48 crc kubenswrapper[4869]: E0127 09:54:48.032926 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.032873 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:48 crc kubenswrapper[4869]: E0127 09:54:48.033079 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.032704 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:48 crc kubenswrapper[4869]: E0127 09:54:48.033259 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:54:48 crc kubenswrapper[4869]: E0127 09:54:48.033190 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.059472 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.059510 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.059532 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.059552 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.059566 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:48Z","lastTransitionTime":"2026-01-27T09:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.162742 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.162867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.162894 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.162923 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.162946 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:48Z","lastTransitionTime":"2026-01-27T09:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.232787 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.258167 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441e
cd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:48Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.265743 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.265780 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.265790 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.265808 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.265821 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:48Z","lastTransitionTime":"2026-01-27T09:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.272565 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b34cd5aa-e234-4132-a206-ee911234e4fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d421b97e5f8a27808a726111b6512ca6beb22600f7ce6b0d6b181c0c9a94c269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2175d560bd3b49088520c674e6668143955bdbeb0c8fc99c8186146ab4b733e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75205c7a7ad7b74b6a4c04b1f29c57d66e8899e41700cff45fbdcbc162a251f\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:48Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.291349 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:48Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.322205 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22
578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22a3bb16e18362a478c9aff77d70255ea9fd957b209a36b6e61a40d8a29527d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:23Z\\\",\\\"message\\\":\\\", UUID:\\\\\\\"97419c58-41c7-41d7-a137-a446f0c7eeb3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/redhat-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.138\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 09:54:23.207083 6287 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 09:54:23.207145 6287 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:36Z\\\",\\\"message\\\":\\\"p_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 09:54:36.147597 6525 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0127 09:54:36.147615 6525 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 2.274197ms\\\\nI0127 09:54:36.147635 6525 services_controller.go:356] Processing sync for service openshift-etcd/etcd for network=default\\\\nI0127 09:54:36.147646 6525 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 09:54:36.147453 6525 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0127 09:54:36.147731 6525 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler 
{0x1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:48Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.336414 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p5frm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf72cba-f163-4dc2-b157-cfeb56d0177b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p5frm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:48Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.349110 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:48Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.362583 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:48Z is after 2025-08-24T17:21:41Z"
Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.367660 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.367699 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.367707 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.367722 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.367731 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:48Z","lastTransitionTime":"2026-01-27T09:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.376166 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-27T09:54:48Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.389378 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:48Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.407230 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:48Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.422021 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:48Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.439342 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:48Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.452928 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:48Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.466722 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:48Z is after 2025-08-24T17:21:41Z"
Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.471106 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.471151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.471161 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.471175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.471185 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:48Z","lastTransitionTime":"2026-01-27T09:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.481457 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:48Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.492533 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:48Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.502744 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df853189-32d1-44e5-8016-631a6f2880f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c2826308ac00d904e3f5e85796421150b10d87d5705c44b9a974986ee5537c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0977701a311923ecf54012a82d2e5ca4804846c56019a08b28d7dd556af7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xqf8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:48Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.516261 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:48Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.574181 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.574231 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.574244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.574259 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.574267 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:48Z","lastTransitionTime":"2026-01-27T09:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.677786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.677878 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.677893 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.677916 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.677930 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:48Z","lastTransitionTime":"2026-01-27T09:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.781283 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.781325 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.781339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.781355 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.781365 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:48Z","lastTransitionTime":"2026-01-27T09:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.885356 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.885435 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.885456 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.885482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.885501 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:48Z","lastTransitionTime":"2026-01-27T09:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.989385 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.989431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.989448 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.989465 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:48 crc kubenswrapper[4869]: I0127 09:54:48.989476 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:48Z","lastTransitionTime":"2026-01-27T09:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.009020 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:39:12.611567792 +0000 UTC Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.093458 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.093496 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.093507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.093525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.093536 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:49Z","lastTransitionTime":"2026-01-27T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.108192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.108231 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.108242 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.108258 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.108268 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:49Z","lastTransitionTime":"2026-01-27T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:49 crc kubenswrapper[4869]: E0127 09:54:49.123645 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:49Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.128539 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.128598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.128616 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.128643 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.128660 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:49Z","lastTransitionTime":"2026-01-27T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:49 crc kubenswrapper[4869]: E0127 09:54:49.147528 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:49Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.152175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.152250 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.152272 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.152299 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.152324 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:49Z","lastTransitionTime":"2026-01-27T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:49 crc kubenswrapper[4869]: E0127 09:54:49.168128 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:49Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.173480 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.173932 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.174077 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.174216 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.174313 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:49Z","lastTransitionTime":"2026-01-27T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 09:54:49 crc kubenswrapper[4869]: E0127 09:54:49.195475 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{[...]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:49Z is after 2025-08-24T17:21:41Z"
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.200080 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.200166 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.200185 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.200241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.200260 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:49Z","lastTransitionTime":"2026-01-27T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 09:54:49 crc kubenswrapper[4869]: E0127 09:54:49.214677 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{[...]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:49Z is after 2025-08-24T17:21:41Z"
Jan 27 09:54:49 crc kubenswrapper[4869]: E0127 09:54:49.214949 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
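Every one of the status-patch retries above fails for the same root cause: the node.network-node-identity.openshift.io webhook listening on https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-27. Below is a minimal Go sketch (not part of the log; the address and expected dates come straight from the error text) that dials the listener and prints the certificate's validity window, to confirm locally what the kubelet is seeing. Skipping chain verification is deliberate here, since verification is exactly the step that fails.

// certcheck.go: dial the webhook endpoint named in the kubelet errors and
// print the validity window of the certificate it serves.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// InsecureSkipVerify is intentional: we only want to inspect the
	// presented certificate, not validate it (validation is what fails).
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook listener: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if now := time.Now(); now.After(cert.NotAfter) {
		fmt.Printf("certificate expired %s ago\n", now.Sub(cert.NotAfter).Round(time.Hour))
	}
}

Until that certificate is rotated (or the node clock is back inside its validity window), every kubelet status patch will keep bouncing off the webhook, which is why the same error repeats with an identical payload on each retry.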
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.216706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.216787 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.216815 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.216897 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.217011 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:49Z","lastTransitionTime":"2026-01-27T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.321727 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.321777 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.321789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.321808 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.321820 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:49Z","lastTransitionTime":"2026-01-27T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.424506 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.424563 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.424580 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.424603 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.424620 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:49Z","lastTransitionTime":"2026-01-27T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.527263 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.527312 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.527327 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.527343 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.527352 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:49Z","lastTransitionTime":"2026-01-27T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.631301 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.631367 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.631385 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.631406 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.631423 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:49Z","lastTransitionTime":"2026-01-27T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.734590 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.734941 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.735029 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.735093 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.735164 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:49Z","lastTransitionTime":"2026-01-27T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.838195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.838251 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.838260 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.838274 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.838285 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:49Z","lastTransitionTime":"2026-01-27T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.940490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.940521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.940529 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.940541 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:49 crc kubenswrapper[4869]: I0127 09:54:49.940550 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:49Z","lastTransitionTime":"2026-01-27T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.009925 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 15:42:47.456539045 +0000 UTC Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.032225 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.032261 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.032262 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:50 crc kubenswrapper[4869]: E0127 09:54:50.032367 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.032225 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:50 crc kubenswrapper[4869]: E0127 09:54:50.032457 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:50 crc kubenswrapper[4869]: E0127 09:54:50.032519 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:50 crc kubenswrapper[4869]: E0127 09:54:50.032562 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.042021 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.042078 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.042090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.042119 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.042129 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:50Z","lastTransitionTime":"2026-01-27T09:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.144053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.144124 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.144135 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.144169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.144180 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:50Z","lastTransitionTime":"2026-01-27T09:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.245809 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.245924 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.245942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.245964 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.245975 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:50Z","lastTransitionTime":"2026-01-27T09:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.348240 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.348272 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.348281 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.348295 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.348306 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:50Z","lastTransitionTime":"2026-01-27T09:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.450497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.450528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.450537 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.450551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.450562 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:50Z","lastTransitionTime":"2026-01-27T09:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.552683 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.552738 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.552748 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.552760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.552769 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:50Z","lastTransitionTime":"2026-01-27T09:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.655463 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.655507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.655519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.655535 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.655547 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:50Z","lastTransitionTime":"2026-01-27T09:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.757584 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.757627 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.757635 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.757646 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.757654 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:50Z","lastTransitionTime":"2026-01-27T09:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.860166 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.860224 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.860235 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.860250 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.860259 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:50Z","lastTransitionTime":"2026-01-27T09:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.962591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.962674 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.962689 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.962963 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:50 crc kubenswrapper[4869]: I0127 09:54:50.962981 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:50Z","lastTransitionTime":"2026-01-27T09:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.010293 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 02:49:19.312143361 +0000 UTC Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.033240 4869 scope.go:117] "RemoveContainer" containerID="e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89" Jan 27 09:54:51 crc kubenswrapper[4869]: E0127 09:54:51.033412 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-45hzs_openshift-ovn-kubernetes(8d38c693-da40-464a-9822-f98fb1b5ca35)\"" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.054873 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.066345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.066373 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.066382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.066396 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:51 crc 
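[Note on the recurring webhook failure: every "Failed to update status for pod" entry in this window shares one root cause. The kubelet's status PATCH is intercepted by the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, whose serving certificate expired 2025-08-24T17:21:41Z while the node clock reads 2026-01-27, so each TLS handshake fails before any patch is attempted. Below is a minimal Go sketch of the same validity check the TLS client performs; the certificate path is a placeholder for illustration, not a path taken from this log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Placeholder path: point this at the webhook's serving certificate.
	data, err := os.ReadFile("/path/to/webhook-serving-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s\n", cert.NotBefore, cert.NotAfter)
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		// This is the condition the handshake surfaces as
		// "x509: certificate has expired or is not yet valid".
		fmt.Printf("certificate INVALID at %s\n", now)
	} else {
		fmt.Printf("certificate valid at %s\n", now)
	}
}

Run against the webhook's serving certificate, this would print the validity window and flag the same "expired or is not yet valid" condition reported in each handshake error above.]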
kubenswrapper[4869]: I0127 09:54:51.066405 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:51Z","lastTransitionTime":"2026-01-27T09:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.067084 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p5frm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf72cba-f163-4dc2-b157-cfeb56d0177b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p5frm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.097643 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.109014 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b34cd5aa-e234-4132-a206-ee911234e4fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d421b97e5f8a27808a726111b6512ca6beb22600f7ce6b0d6b181c0c9a94c269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2175d560bd3b49088520c674e6668143955bdbeb0c8fc99c8186146ab4b733e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75205c7a7ad7b74b6a4c04b1f29c57d66e8899e41700cff45fbdcbc162a251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.121556 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\
"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.144143 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9eb78ed1123343117c4139a39a25e40772f20ca
af1500755bf082c4b60ecd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:36Z\\\",\\\"message\\\":\\\"p_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 09:54:36.147597 6525 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0127 09:54:36.147615 6525 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 2.274197ms\\\\nI0127 09:54:36.147635 6525 services_controller.go:356] Processing sync for service openshift-etcd/etcd for network=default\\\\nI0127 09:54:36.147646 6525 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 09:54:36.147453 6525 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0127 09:54:36.147731 6525 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-45hzs_openshift-ovn-kubernetes(8d38c693-da40-464a-9822-f98fb1b5ca35)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.161602 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.168499 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.168677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.168762 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.168877 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.168963 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:51Z","lastTransitionTime":"2026-01-27T09:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.175537 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.193359 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.212936 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.223969 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.233328 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.248033 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.262858 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df853189-32d1-44e5-8016-631a6f2880f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c2826308ac00d904e3f5e85796421150b10d87d5705c44b9a974986ee5537c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0977701a311923ecf54012a82d2e5ca4804846c56019a08b28d7dd556af7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xqf8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.271423 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.271478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.271490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.271505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.271514 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:51Z","lastTransitionTime":"2026-01-27T09:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.275371 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.292739 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.304377 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
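The networking-console-plugin, check-endpoints, and network-check-target-container statuses above all carry lastState.terminated with exitCode 137 and reason ContainerStatusUnknown ("The container could not be located when the pod was deleted"), while their current state is waiting in ContainerCreating. A client-go sketch that reads those same lastState fields back out of the API; the kubeconfig path is hypothetical, and the namespace and pod name are copied from the log lines:

```go
// Prints the lastState fields the status patches above carry
// (exitCode 137, reason ContainerStatusUnknown).
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("openshift-network-diagnostics").
		Get(context.TODO(), "network-check-target-xd92c", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		if t := st.LastTerminationState.Terminated; t != nil {
			fmt.Printf("%s: exitCode=%d reason=%s message=%q\n",
				st.Name, t.ExitCode, t.Reason, t.Message)
		}
	}
}
```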
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.323843 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:51Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.373795 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.373857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.373867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.373880 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.373892 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:51Z","lastTransitionTime":"2026-01-27T09:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.475575 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.475952 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.476030 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.476096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.476162 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:51Z","lastTransitionTime":"2026-01-27T09:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.582325 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.582366 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.582376 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.582417 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.582428 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:51Z","lastTransitionTime":"2026-01-27T09:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.684901 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.684939 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.684949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.684988 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.685004 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:51Z","lastTransitionTime":"2026-01-27T09:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.786987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.787034 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.787043 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.787058 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.787068 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:51Z","lastTransitionTime":"2026-01-27T09:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.890231 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.890280 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.890292 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.890311 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.890323 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:51Z","lastTransitionTime":"2026-01-27T09:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.992318 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.992349 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.992357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.992369 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:51 crc kubenswrapper[4869]: I0127 09:54:51.992377 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:51Z","lastTransitionTime":"2026-01-27T09:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.011102 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:03:11.327382327 +0000 UTC Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.032081 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.032126 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.032145 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:52 crc kubenswrapper[4869]: E0127 09:54:52.032189 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.032083 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:52 crc kubenswrapper[4869]: E0127 09:54:52.032309 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:52 crc kubenswrapper[4869]: E0127 09:54:52.032358 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:54:52 crc kubenswrapper[4869]: E0127 09:54:52.032397 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
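Each "failed to patch status" error embeds the attempted strategic-merge patch as a doubly escaped JSON string, which makes keys like $setElementOrder/conditions hard to read in place. A small sketch that pretty-prints such a payload once the journal-level escaping is stripped; the sample string is an abbreviated fragment of one patch above, and a full payload can be pasted in its place:

```go
// Pretty-prints a status patch extracted from the log. The sample is
// abbreviated; real payloads from the log are much longer.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
)

func main() {
	// Fragment copied from the log with the \\\" escaping removed.
	patch := `{"metadata":{"uid":"3b6479f0-333b-4a96-9adf-2099afdc2447"},` +
		`"status":{"$setElementOrder/conditions":[{"type":"Ready"}],` +
		`"podIP":null,"podIPs":null}}`
	var buf bytes.Buffer
	if err := json.Indent(&buf, []byte(patch), "", "  "); err != nil {
		log.Fatal(err)
	}
	fmt.Println(buf.String())
}
```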
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.044229 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:52Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.057552 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:52Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.069241 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:52Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.082625 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:52Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.094762 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.094796 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.094805 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.094819 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.094844 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:52Z","lastTransitionTime":"2026-01-27T09:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.105716 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825
ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:52Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.114824 4869 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:52Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.128353 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:52Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.140730 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:52Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.153176 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:52Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.163915 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:52Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.173861 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:52Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.182897 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df853189-32d1-44e5-8016-631a6f2880f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c2826308ac00d904e3f5e85796421150b10d87d5705c44b9a974986ee5537c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0977701a311923ecf54012a82d2e5ca4804846c56019a08b28d7dd556af7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:25Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xqf8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:52Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.194898 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:52Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.201090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.201337 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.201940 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.202019 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.202078 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:52Z","lastTransitionTime":"2026-01-27T09:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.216563 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f5
63ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:52Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.229176 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b34cd5aa-e234-4132-a206-ee911234e4fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d421b97e5f8a27808a726111b6512ca6beb22600f7ce6b0d6b181c0c9a94c269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2175d560bd3b49088520c674e6668143955bdbeb0c8fc99c8186146ab4b733e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75205c7a7ad7b74b6a4c04b1f29c57d66e8899e41700cff45fbdcbc162a251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:52Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.242350 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\
"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:52Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.262511 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9eb78ed1123343117c4139a39a25e40772f20ca
af1500755bf082c4b60ecd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:36Z\\\",\\\"message\\\":\\\"p_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 09:54:36.147597 6525 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0127 09:54:36.147615 6525 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 2.274197ms\\\\nI0127 09:54:36.147635 6525 services_controller.go:356] Processing sync for service openshift-etcd/etcd for network=default\\\\nI0127 09:54:36.147646 6525 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 09:54:36.147453 6525 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0127 09:54:36.147731 6525 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-45hzs_openshift-ovn-kubernetes(8d38c693-da40-464a-9822-f98fb1b5ca35)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:52Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.272610 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p5frm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf72cba-f163-4dc2-b157-cfeb56d0177b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p5frm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:52Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.304132 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.304426 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.304567 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.304699 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.304825 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:52Z","lastTransitionTime":"2026-01-27T09:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.406620 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.406659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.406671 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.406686 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.406699 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:52Z","lastTransitionTime":"2026-01-27T09:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.509488 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.509748 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.509821 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.509927 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.509993 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:52Z","lastTransitionTime":"2026-01-27T09:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.611779 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.612137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.612238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.612338 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.612413 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:52Z","lastTransitionTime":"2026-01-27T09:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.715479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.715547 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.715570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.715602 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.715623 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:52Z","lastTransitionTime":"2026-01-27T09:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.818204 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.818243 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.818255 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.818272 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.818285 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:52Z","lastTransitionTime":"2026-01-27T09:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.920265 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.920312 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.920324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.920344 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:52 crc kubenswrapper[4869]: I0127 09:54:52.920356 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:52Z","lastTransitionTime":"2026-01-27T09:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.012066 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 07:46:54.762069133 +0000 UTC Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.023196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.023250 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.023271 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.023295 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.023316 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:53Z","lastTransitionTime":"2026-01-27T09:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.125824 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.125912 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.125929 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.125951 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.125968 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:53Z","lastTransitionTime":"2026-01-27T09:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.228109 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.228184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.228207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.228237 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.228260 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:53Z","lastTransitionTime":"2026-01-27T09:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.331120 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.331468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.331684 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.331912 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.332104 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:53Z","lastTransitionTime":"2026-01-27T09:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.434165 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.434244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.434265 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.434291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.434309 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:53Z","lastTransitionTime":"2026-01-27T09:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.537732 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.537763 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.537773 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.537789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.537798 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:53Z","lastTransitionTime":"2026-01-27T09:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.640345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.640384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.640393 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.640407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.640415 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:53Z","lastTransitionTime":"2026-01-27T09:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.743045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.743341 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.743440 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.743554 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.743651 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:53Z","lastTransitionTime":"2026-01-27T09:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.845856 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.846196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.846298 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.846402 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.846498 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:53Z","lastTransitionTime":"2026-01-27T09:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.949728 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.949766 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.949777 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.949794 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:53 crc kubenswrapper[4869]: I0127 09:54:53.949805 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:53Z","lastTransitionTime":"2026-01-27T09:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.013204 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:29:46.981554327 +0000 UTC Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.032287 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.032350 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.032449 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.032515 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:54 crc kubenswrapper[4869]: E0127 09:54:54.033097 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:54 crc kubenswrapper[4869]: E0127 09:54:54.033244 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:54 crc kubenswrapper[4869]: E0127 09:54:54.033407 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:54 crc kubenswrapper[4869]: E0127 09:54:54.033526 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.051928 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.052280 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.052476 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.052666 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.052826 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:54Z","lastTransitionTime":"2026-01-27T09:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.155871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.155896 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.155904 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.155916 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.155924 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:54Z","lastTransitionTime":"2026-01-27T09:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.258992 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.259027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.259037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.259051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.259063 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:54Z","lastTransitionTime":"2026-01-27T09:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.361733 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.361763 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.361771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.361783 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.361792 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:54Z","lastTransitionTime":"2026-01-27T09:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.463916 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.464070 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.464155 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.464222 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.464311 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:54Z","lastTransitionTime":"2026-01-27T09:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.566433 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.566466 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.566476 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.566489 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.566498 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:54Z","lastTransitionTime":"2026-01-27T09:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.669548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.669597 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.669610 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.669629 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.669642 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:54Z","lastTransitionTime":"2026-01-27T09:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.771723 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.771767 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.771781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.771800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.771814 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:54Z","lastTransitionTime":"2026-01-27T09:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.874613 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.874659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.874672 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.874690 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.874703 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:54Z","lastTransitionTime":"2026-01-27T09:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.977848 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.977946 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.977968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.978068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:54 crc kubenswrapper[4869]: I0127 09:54:54.978138 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:54Z","lastTransitionTime":"2026-01-27T09:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.013601 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 14:14:09.419409338 +0000 UTC Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.080091 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.080116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.080125 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.080137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.080147 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:55Z","lastTransitionTime":"2026-01-27T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.182009 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.182046 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.182060 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.182074 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.182083 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:55Z","lastTransitionTime":"2026-01-27T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.286685 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.286729 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.286740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.286755 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.286768 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:55Z","lastTransitionTime":"2026-01-27T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.389311 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.389349 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.389359 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.389375 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.389386 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:55Z","lastTransitionTime":"2026-01-27T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.490866 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.490902 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.490912 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.490927 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.490937 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:55Z","lastTransitionTime":"2026-01-27T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.592917 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.592956 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.592964 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.592978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.592987 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:55Z","lastTransitionTime":"2026-01-27T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.695999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.696038 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.696049 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.696064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.696075 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:55Z","lastTransitionTime":"2026-01-27T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.798869 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.798904 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.798919 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.798935 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.798946 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:55Z","lastTransitionTime":"2026-01-27T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.901279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.901643 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.901875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.902163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:55 crc kubenswrapper[4869]: I0127 09:54:55.902361 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:55Z","lastTransitionTime":"2026-01-27T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.005386 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.005415 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.005423 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.005435 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.005445 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:56Z","lastTransitionTime":"2026-01-27T09:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.014153 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 14:29:42.717779579 +0000 UTC Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.033549 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:56 crc kubenswrapper[4869]: E0127 09:54:56.033637 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.033711 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.033736 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:56 crc kubenswrapper[4869]: E0127 09:54:56.033901 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.034134 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:56 crc kubenswrapper[4869]: E0127 09:54:56.034350 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:56 crc kubenswrapper[4869]: E0127 09:54:56.034439 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.109348 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.109378 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.109399 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.109414 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.109424 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:56Z","lastTransitionTime":"2026-01-27T09:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.211651 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.211688 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.211697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.211712 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.211721 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:56Z","lastTransitionTime":"2026-01-27T09:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.314101 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.314132 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.314141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.314154 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.314162 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:56Z","lastTransitionTime":"2026-01-27T09:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.416480 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.416504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.416512 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.416526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.416534 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:56Z","lastTransitionTime":"2026-01-27T09:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.518357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.518389 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.518398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.518414 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.518425 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:56Z","lastTransitionTime":"2026-01-27T09:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.621116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.621160 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.621170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.621185 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.621194 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:56Z","lastTransitionTime":"2026-01-27T09:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.723610 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.723640 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.723649 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.723662 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.723670 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:56Z","lastTransitionTime":"2026-01-27T09:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.827095 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.827364 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.827468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.827556 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.827643 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:56Z","lastTransitionTime":"2026-01-27T09:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.931729 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.931765 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.931774 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.931788 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:56 crc kubenswrapper[4869]: I0127 09:54:56.931797 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:56Z","lastTransitionTime":"2026-01-27T09:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.014537 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 09:32:13.37002846 +0000 UTC Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.033378 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.033594 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.033686 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.033771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.033874 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:57Z","lastTransitionTime":"2026-01-27T09:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.136166 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.136196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.136204 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.136218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.136227 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:57Z","lastTransitionTime":"2026-01-27T09:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.238499 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.238539 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.238548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.238562 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.238571 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:57Z","lastTransitionTime":"2026-01-27T09:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.340948 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.341026 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.341040 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.341057 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.341068 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:57Z","lastTransitionTime":"2026-01-27T09:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.443432 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.443462 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.443471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.443484 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.443495 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:57Z","lastTransitionTime":"2026-01-27T09:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.545281 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.545323 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.545335 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.545351 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.545363 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:57Z","lastTransitionTime":"2026-01-27T09:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.648656 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.648698 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.648707 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.648723 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.648733 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:57Z","lastTransitionTime":"2026-01-27T09:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.751759 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.751825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.751860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.751879 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.751892 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:57Z","lastTransitionTime":"2026-01-27T09:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.755338 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs\") pod \"network-metrics-daemon-p5frm\" (UID: \"0bf72cba-f163-4dc2-b157-cfeb56d0177b\") " pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:57 crc kubenswrapper[4869]: E0127 09:54:57.755473 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 09:54:57 crc kubenswrapper[4869]: E0127 09:54:57.755549 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs podName:0bf72cba-f163-4dc2-b157-cfeb56d0177b nodeName:}" failed. No retries permitted until 2026-01-27 09:55:29.755530055 +0000 UTC m=+98.375954148 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs") pod "network-metrics-daemon-p5frm" (UID: "0bf72cba-f163-4dc2-b157-cfeb56d0177b") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.854775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.854872 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.854890 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.854917 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.854935 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:57Z","lastTransitionTime":"2026-01-27T09:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.957269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.957310 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.957319 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.957332 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:57 crc kubenswrapper[4869]: I0127 09:54:57.957342 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:57Z","lastTransitionTime":"2026-01-27T09:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.015576 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 09:32:07.508993006 +0000 UTC Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.033019 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.033076 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.033072 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.033036 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:54:58 crc kubenswrapper[4869]: E0127 09:54:58.033191 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:54:58 crc kubenswrapper[4869]: E0127 09:54:58.033299 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:54:58 crc kubenswrapper[4869]: E0127 09:54:58.033365 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:54:58 crc kubenswrapper[4869]: E0127 09:54:58.033543 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.059480 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.059524 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.059535 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.059554 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.059567 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:58Z","lastTransitionTime":"2026-01-27T09:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.162403 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.162468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.162483 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.162501 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.162512 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:58Z","lastTransitionTime":"2026-01-27T09:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.264799 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.264852 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.264864 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.264880 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.264889 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:58Z","lastTransitionTime":"2026-01-27T09:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.367465 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.367499 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.367507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.367519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.367528 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:58Z","lastTransitionTime":"2026-01-27T09:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.469735 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.469812 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.469870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.469934 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.469957 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:58Z","lastTransitionTime":"2026-01-27T09:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.571949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.571986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.571998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.572011 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.572020 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:58Z","lastTransitionTime":"2026-01-27T09:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.674668 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.674713 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.674724 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.674737 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.674748 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:58Z","lastTransitionTime":"2026-01-27T09:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.777351 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.777383 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.777413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.777430 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.777439 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:58Z","lastTransitionTime":"2026-01-27T09:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.880009 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.880052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.880063 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.880081 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.880094 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:58Z","lastTransitionTime":"2026-01-27T09:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.982197 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.982242 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.982252 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.982264 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:58 crc kubenswrapper[4869]: I0127 09:54:58.982273 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:58Z","lastTransitionTime":"2026-01-27T09:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.016660 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:45:05.396053359 +0000 UTC Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.084221 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.084258 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.084270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.084286 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.084300 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:59Z","lastTransitionTime":"2026-01-27T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.186719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.186760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.186771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.186784 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.186793 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:59Z","lastTransitionTime":"2026-01-27T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.289281 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.289318 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.289326 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.289339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.289348 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:59Z","lastTransitionTime":"2026-01-27T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.391465 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.391505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.391516 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.391537 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.391548 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:59Z","lastTransitionTime":"2026-01-27T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:54:59 crc kubenswrapper[4869]: E0127 09:54:59.402989 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:59Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.411272 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.411336 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.411359 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.411389 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.411409 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:59Z","lastTransitionTime":"2026-01-27T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:59 crc kubenswrapper[4869]: E0127 09:54:59.425698 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:59Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.429491 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.429527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.429537 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.429551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.429561 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:59Z","lastTransitionTime":"2026-01-27T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:59 crc kubenswrapper[4869]: E0127 09:54:59.441870 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[Editor's note: the kubelet immediately retried the node-status patch four more times (E0127 09:54:59.425698, 09:54:59.441870, 09:54:59.457258, and 09:54:59.471885), each attempt logging the same "Error updating node status, will retry" entry with an identical status payload and failing on the same expired node.network-node-identity.openshift.io webhook certificate; each failure was followed by the same NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady event cluster and the same "Node became not ready" condition. The duplicated entries are elided here.]
Jan 27 09:54:59 crc kubenswrapper[4869]: E0127 09:54:59.472020 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" [Editor's note: the same event cluster and "Node became not ready" condition repeated once more at 09:54:59.473631 through 09:54:59.473692; elided.]
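[Editor's note: every retry failed for the same root cause: the serving certificate of the node.network-node-identity.openshift.io webhook expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-27. A quick verification, assuming shell access to the node (the loopback endpoint comes from the Post URL in the errors above):

    # Print the validity window of the webhook's serving certificate;
    # notAfter should match the 2025-08-24 expiry reported by the kubelet.
    openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null | openssl x509 -noout -dates

For a CRC instance resumed long after its certificates lapsed, the usual options are to let the cluster's certificate rotation complete on start or, if it cannot, to recreate the instance with crc delete followed by crc start.]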
event="NodeHasSufficientMemory" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.473660 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.473669 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.473682 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.473692 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:59Z","lastTransitionTime":"2026-01-27T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.484655 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xj5gd_c4e8dfa0-1849-457a-b564-4f77e534a7e0/kube-multus/0.log" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.484737 4869 generic.go:334] "Generic (PLEG): container finished" podID="c4e8dfa0-1849-457a-b564-4f77e534a7e0" containerID="510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a" exitCode=1 Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.484778 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xj5gd" event={"ID":"c4e8dfa0-1849-457a-b564-4f77e534a7e0","Type":"ContainerDied","Data":"510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a"} Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.486418 4869 scope.go:117] "RemoveContainer" containerID="510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.496710 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:59Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.509632 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:59Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.522121 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:59Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.534385 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:59Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.545241 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:59Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.555216 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df853189-32d1-44e5-8016-631a6f2880f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c2826308ac00d904e3f5e85796421150b10d87d5705c44b9a974986ee5537c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0977701a311923ecf54012a82d2e5ca4804846c56019a08b28d7dd556af7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:25Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xqf8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:59Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.566101 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:59Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.575711 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.575740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.575749 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.575763 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.575789 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:59Z","lastTransitionTime":"2026-01-27T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.583733 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f5
63ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:59Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.596598 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b34cd5aa-e234-4132-a206-ee911234e4fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d421b97e5f8a27808a726111b6512ca6beb22600f7ce6b0d6b181c0c9a94c269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2175d560bd3b49088520c674e6668143955bdbeb0c8fc99c8186146ab4b733e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75205c7a7ad7b74b6a4c04b1f29c57d66e8899e41700cff45fbdcbc162a251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:59Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.609661 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"2026-01-27T09:54:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c708bd02-e8af-4686-84a1-1c9b692d637a\\\\n2026-01-27T09:54:13+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c708bd02-e8af-4686-84a1-1c9b692d637a to /host/opt/cni/bin/\\\\n2026-01-27T09:54:14Z [verbose] multus-daemon started\\\\n2026-01-27T09:54:14Z [verbose] Readiness Indicator file check\\\\n2026-01-27T09:54:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:59Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.628095 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:36Z\\\",\\\"message\\\":\\\"p_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 09:54:36.147597 6525 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0127 09:54:36.147615 6525 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 2.274197ms\\\\nI0127 09:54:36.147635 6525 services_controller.go:356] Processing sync for service openshift-etcd/etcd for network=default\\\\nI0127 09:54:36.147646 6525 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 09:54:36.147453 6525 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0127 09:54:36.147731 6525 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-45hzs_openshift-ovn-kubernetes(8d38c693-da40-464a-9822-f98fb1b5ca35)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:59Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.639661 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p5frm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf72cba-f163-4dc2-b157-cfeb56d0177b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p5frm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:59Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.651180 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:59Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.663538 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:59Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.673633 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:59Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.677396 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.677416 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.677424 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.677435 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.677444 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:59Z","lastTransitionTime":"2026-01-27T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.684458 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:59Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.699492 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:59Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.709926 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:54:59Z is after 2025-08-24T17:21:41Z" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.779394 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.779420 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.779427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.779441 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.779451 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:59Z","lastTransitionTime":"2026-01-27T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.881501 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.881540 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.881550 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.881564 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.881574 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:59Z","lastTransitionTime":"2026-01-27T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.984133 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.984160 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.984169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.984181 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:54:59 crc kubenswrapper[4869]: I0127 09:54:59.984189 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:54:59Z","lastTransitionTime":"2026-01-27T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.017611 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 11:57:11.461226336 +0000 UTC Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.033108 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.033180 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.033216 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:00 crc kubenswrapper[4869]: E0127 09:55:00.033340 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.033367 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:00 crc kubenswrapper[4869]: E0127 09:55:00.033448 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:00 crc kubenswrapper[4869]: E0127 09:55:00.033537 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:00 crc kubenswrapper[4869]: E0127 09:55:00.033608 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.086546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.086585 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.086595 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.086609 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.086626 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:00Z","lastTransitionTime":"2026-01-27T09:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.188993 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.189024 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.189036 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.189057 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.189069 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:00Z","lastTransitionTime":"2026-01-27T09:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.291786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.291817 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.291825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.291860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.291869 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:00Z","lastTransitionTime":"2026-01-27T09:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.394135 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.394173 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.394185 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.394201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.394212 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:00Z","lastTransitionTime":"2026-01-27T09:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.489430 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xj5gd_c4e8dfa0-1849-457a-b564-4f77e534a7e0/kube-multus/0.log" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.489478 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xj5gd" event={"ID":"c4e8dfa0-1849-457a-b564-4f77e534a7e0","Type":"ContainerStarted","Data":"66392d6e395aa6ef33d94595eb5b6670f9205bc5591c35db295b8e29d84c7c63"} Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.496277 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.496319 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.496334 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.496350 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.496363 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:00Z","lastTransitionTime":"2026-01-27T09:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.505094 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.546534 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b34cd5aa-e234-4132-a206-ee911234e4fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d421b97e5f8a27808a726111b6512ca6beb22600f7ce6b0d6b181c0c9a94c269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2175d560bd3b49088520c674e6668143955bdbeb0c8fc99c8186146ab4b733e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75205c7a7ad7b74b6a4c04b1f29c57d66e8899e41700cff45fbdcbc162a251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.557336 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66392d6e395aa6ef33d94595eb5b6670f9205bc5591c35db295b8e29d84c7c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"2026-01-27T09:54:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c708bd02-e8af-4686-84a1-1c9b692d637a\\\\n2026-01-27T09:54:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c708bd02-e8af-4686-84a1-1c9b692d637a to /host/opt/cni/bin/\\\\n2026-01-27T09:54:14Z [verbose] multus-daemon started\\\\n2026-01-27T09:54:14Z [verbose] Readiness Indicator file check\\\\n2026-01-27T09:54:59Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.572563 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:36Z\\\",\\\"message\\\":\\\"p_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 09:54:36.147597 6525 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0127 09:54:36.147615 6525 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 2.274197ms\\\\nI0127 09:54:36.147635 6525 services_controller.go:356] Processing sync for service openshift-etcd/etcd for network=default\\\\nI0127 09:54:36.147646 6525 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 09:54:36.147453 6525 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0127 09:54:36.147731 6525 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-45hzs_openshift-ovn-kubernetes(8d38c693-da40-464a-9822-f98fb1b5ca35)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.580985 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p5frm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf72cba-f163-4dc2-b157-cfeb56d0177b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p5frm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.597420 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.598864 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.598893 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.598902 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.598915 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.598923 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:00Z","lastTransitionTime":"2026-01-27T09:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.608431 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.617065 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.624660 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.638158 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.649951 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.663068 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.674081 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.684240 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.695033 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.701767 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.701823 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.701845 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.701899 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.701914 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:00Z","lastTransitionTime":"2026-01-27T09:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.704527 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.715920 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df853189-32d1-44e5-8016-631a6f2880f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c2826308ac00d904e3f5e85796421150b10d87d5705c44b9a974986ee5537c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0977701a311923ecf54012a82d2e5ca4804846c56019a08b28d7dd556af7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:
24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xqf8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.727987 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:00Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.804874 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.805141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.805241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.805325 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.805442 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:00Z","lastTransitionTime":"2026-01-27T09:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.907724 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.907771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.907783 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.907801 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:00 crc kubenswrapper[4869]: I0127 09:55:00.907815 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:00Z","lastTransitionTime":"2026-01-27T09:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.009889 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.009920 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.009929 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.009942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.009950 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:01Z","lastTransitionTime":"2026-01-27T09:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.018255 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 16:59:06.982687314 +0000 UTC Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.111628 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.111657 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.111665 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.111677 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.111686 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:01Z","lastTransitionTime":"2026-01-27T09:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.213710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.213753 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.213764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.213780 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.213791 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:01Z","lastTransitionTime":"2026-01-27T09:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.316679 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.316724 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.316732 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.316745 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.316755 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:01Z","lastTransitionTime":"2026-01-27T09:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.419239 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.419279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.419294 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.419314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.419330 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:01Z","lastTransitionTime":"2026-01-27T09:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.520879 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.520922 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.520936 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.520952 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.520964 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:01Z","lastTransitionTime":"2026-01-27T09:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.622978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.623022 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.623034 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.623052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.623064 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:01Z","lastTransitionTime":"2026-01-27T09:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.725258 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.725294 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.725304 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.725353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.725363 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:01Z","lastTransitionTime":"2026-01-27T09:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.827811 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.827866 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.827877 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.827893 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.827907 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:01Z","lastTransitionTime":"2026-01-27T09:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.930249 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.930286 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.930296 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.930310 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:01 crc kubenswrapper[4869]: I0127 09:55:01.930320 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:01Z","lastTransitionTime":"2026-01-27T09:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.018969 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 19:46:39.034302967 +0000 UTC Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.032427 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:02 crc kubenswrapper[4869]: E0127 09:55:02.032525 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.032621 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.032712 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:02 crc kubenswrapper[4869]: E0127 09:55:02.032769 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.032778 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.032918 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.032670 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:02 crc kubenswrapper[4869]: E0127 09:55:02.033033 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.032975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.033057 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.033062 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:02Z","lastTransitionTime":"2026-01-27T09:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:02 crc kubenswrapper[4869]: E0127 09:55:02.033151 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.045874 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.058516 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.069161 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.079751 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.090044 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.099058 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df853189-32d1-44e5-8016-631a6f2880f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c2826308ac00d904e3f5e85796421150b10d87d5705c44b9a974986ee5537c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0977701a311923ecf54012a82d2e5ca4804846c56019a08b28d7dd556af7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:25Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xqf8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.111025 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.128753 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.134554 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.134585 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.134620 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.134639 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.134648 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:02Z","lastTransitionTime":"2026-01-27T09:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.139089 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b34cd5aa-e234-4132-a206-ee911234e4fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d421b97e5f8a27808a726111b6512ca6beb22600f7ce6b0d6b181c0c9a94c269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2175d560bd3b49088520c674e6668143955bdbeb0c8fc99c8186146ab4b733e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75205c7a7ad7b74b6a4c04b1f29c57d66e8899e41700cff45fbdcbc162a251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.150720 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66392d6e395aa6ef33d94595eb5b6670f9205bc5591c35db295b8e29d84c7c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"2026-01-27T09:54:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c708bd02-e8af-4686-84a1-1c9b692d637a\\\\n2026-01-27T09:54:13+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_c708bd02-e8af-4686-84a1-1c9b692d637a to /host/opt/cni/bin/\\\\n2026-01-27T09:54:14Z [verbose] multus-daemon started\\\\n2026-01-27T09:54:14Z [verbose] Readiness Indicator file check\\\\n2026-01-27T09:54:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.166182 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:36Z\\\",\\\"message\\\":\\\"p_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 09:54:36.147597 6525 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0127 09:54:36.147615 6525 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 2.274197ms\\\\nI0127 09:54:36.147635 6525 services_controller.go:356] Processing sync for service openshift-etcd/etcd for network=default\\\\nI0127 09:54:36.147646 6525 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 09:54:36.147453 6525 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0127 09:54:36.147731 6525 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-45hzs_openshift-ovn-kubernetes(8d38c693-da40-464a-9822-f98fb1b5ca35)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.175856 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p5frm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf72cba-f163-4dc2-b157-cfeb56d0177b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p5frm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.185559 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.196334 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.205625 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.213699 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.224270 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.232047 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:02Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.236485 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.236516 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.236529 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.236545 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.236556 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:02Z","lastTransitionTime":"2026-01-27T09:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.339318 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.339360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.339370 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.339384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.339395 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:02Z","lastTransitionTime":"2026-01-27T09:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.441497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.441546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.441560 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.441577 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.441590 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:02Z","lastTransitionTime":"2026-01-27T09:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
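[annotation] The NotReady condition that starts repeating here names its own cause: kubelet finds no CNI configuration under /etc/kubernetes/cni/net.d/, and it keeps re-recording NodeNotReady until the network plugin (OVN-Kubernetes on this node) writes one. A quick check, assuming it runs on the node itself:

    import os, sys

    cni_dir = "/etc/kubernetes/cni/net.d"   # path taken from the kubelet message
    try:
        entries = sorted(os.listdir(cni_dir))
    except FileNotFoundError:
        sys.exit(f"{cni_dir} does not exist yet")

    if not entries:
        print(f"{cni_dir} is empty: kubelet stays NotReady until the plugin writes a config")
    else:
        for name in entries:
            print(os.path.join(cni_dir, name))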
Has your network provider started?"} Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.543882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.543930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.543941 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.543958 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.543971 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:02Z","lastTransitionTime":"2026-01-27T09:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.646324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.646373 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.646385 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.646407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.646419 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:02Z","lastTransitionTime":"2026-01-27T09:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.748618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.748655 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.748664 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.748678 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.748689 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:02Z","lastTransitionTime":"2026-01-27T09:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.850379 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.850419 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.850429 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.850444 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.850453 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:02Z","lastTransitionTime":"2026-01-27T09:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.952640 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.952676 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.952685 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.952698 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:02 crc kubenswrapper[4869]: I0127 09:55:02.952707 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:02Z","lastTransitionTime":"2026-01-27T09:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.019825 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 07:54:04.010908735 +0000 UTC Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.055073 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.055114 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.055123 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.055141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.055150 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:03Z","lastTransitionTime":"2026-01-27T09:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.157317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.157364 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.157374 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.157389 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.157400 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:03Z","lastTransitionTime":"2026-01-27T09:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
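[annotation] The certificate_manager lines interleaved above report a freshly computed, randomized rotation deadline on each evaluation; the deadlines in this log (2025-11-19, 2025-12-27, 2026-01-04) all fall before the node's current clock, so rotation is already due and the manager keeps recomputing. A sketch of the jittered-deadline idea, approximating upstream client-go behaviour of drawing the deadline from roughly 70-90% of the certificate lifetime; the issue time below is an assumption, since the log only shows the expiration:

    import random
    from datetime import datetime, timedelta

    # Expiration from the log line: 2026-02-24 05:53:03 UTC.
    not_before = datetime(2025, 2, 24, 5, 53, 3)   # assumed issue time (1-year cert)
    not_after  = datetime(2026, 2, 24, 5, 53, 3)

    lifetime = not_after - not_before
    # Approximation of the upstream rule: pick a deadline uniformly in
    # roughly the 70-90% band of the certificate's lifetime.
    deadline = not_before + timedelta(
        seconds=lifetime.total_seconds() * random.uniform(0.7, 0.9)
    )
    print("rotation deadline:", deadline.isoformat())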
Has your network provider started?"} Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.259529 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.259575 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.259586 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.259602 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.259617 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:03Z","lastTransitionTime":"2026-01-27T09:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.361634 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.361672 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.361681 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.361695 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.361705 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:03Z","lastTransitionTime":"2026-01-27T09:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.464298 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.464348 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.464359 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.464373 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.464384 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:03Z","lastTransitionTime":"2026-01-27T09:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.567169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.567214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.567222 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.567238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.567248 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:03Z","lastTransitionTime":"2026-01-27T09:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.669039 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.669088 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.669101 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.669115 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.669126 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:03Z","lastTransitionTime":"2026-01-27T09:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.771071 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.771108 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.771140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.771422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.771452 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:03Z","lastTransitionTime":"2026-01-27T09:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.873233 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.873268 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.873303 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.873339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.873352 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:03Z","lastTransitionTime":"2026-01-27T09:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.975797 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.975823 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.975837 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.975874 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:03 crc kubenswrapper[4869]: I0127 09:55:03.975887 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:03Z","lastTransitionTime":"2026-01-27T09:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.020891 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:05:16.728932482 +0000 UTC Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.032240 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.032265 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.032291 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.032303 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:04 crc kubenswrapper[4869]: E0127 09:55:04.032376 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:04 crc kubenswrapper[4869]: E0127 09:55:04.032485 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:04 crc kubenswrapper[4869]: E0127 09:55:04.032576 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:04 crc kubenswrapper[4869]: E0127 09:55:04.032634 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.078098 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.078139 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.078149 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.078164 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.078172 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:04Z","lastTransitionTime":"2026-01-27T09:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
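[annotation] The four pods above have no sandbox yet, and their sync is skipped outright while NetworkReady=false, so they stay Pending until the CNI config appears. A sketch that waits for the node's Ready condition to flip, assuming kubectl and a working kubeconfig are available on the node:

    import json, subprocess, time

    # Poll the crc node's Ready condition; kubelet flips it to True once a CNI
    # config appears in /etc/kubernetes/cni/net.d/ and the runtime reports
    # NetworkReady.
    while True:
        out = subprocess.run(
            ["kubectl", "get", "node", "crc", "-o", "json"],
            capture_output=True, check=True,
        ).stdout
        conds = json.loads(out)["status"]["conditions"]
        ready = next(c for c in conds if c["type"] == "Ready")
        print(ready["status"], "-", ready.get("message", ""))
        if ready["status"] == "True":
            break
        time.sleep(5)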
Has your network provider started?"} Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.179944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.179976 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.179986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.179998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.180008 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:04Z","lastTransitionTime":"2026-01-27T09:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.282374 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.282412 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.282422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.282436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.282446 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:04Z","lastTransitionTime":"2026-01-27T09:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.384398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.384427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.384437 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.384453 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.384464 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:04Z","lastTransitionTime":"2026-01-27T09:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.486780 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.486825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.486857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.486875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.486886 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:04Z","lastTransitionTime":"2026-01-27T09:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.588940 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.589003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.589011 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.589027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.589037 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:04Z","lastTransitionTime":"2026-01-27T09:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.692003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.692046 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.692056 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.692069 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.692079 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:04Z","lastTransitionTime":"2026-01-27T09:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.794666 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.794719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.794729 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.794747 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.794759 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:04Z","lastTransitionTime":"2026-01-27T09:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.897329 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.897377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.897389 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.897405 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.897418 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:04Z","lastTransitionTime":"2026-01-27T09:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.999807 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.999872 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:04 crc kubenswrapper[4869]: I0127 09:55:04.999884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:04.999898 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:04.999908 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:04Z","lastTransitionTime":"2026-01-27T09:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.021183 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 19:56:40.817379542 +0000 UTC Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.102004 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.102052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.102064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.102082 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.102094 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:05Z","lastTransitionTime":"2026-01-27T09:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.204394 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.204434 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.204444 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.204458 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.204469 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:05Z","lastTransitionTime":"2026-01-27T09:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.308224 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.308269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.308280 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.308296 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.308308 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:05Z","lastTransitionTime":"2026-01-27T09:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.362409 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.363231 4869 scope.go:117] "RemoveContainer" containerID="e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.410700 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.410744 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.410755 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.410786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.410798 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:05Z","lastTransitionTime":"2026-01-27T09:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.506542 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-45hzs_8d38c693-da40-464a-9822-f98fb1b5ca35/ovnkube-controller/2.log" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.512502 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerStarted","Data":"2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690"} Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.513432 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.513451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.513532 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.513556 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.513737 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.513749 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:05Z","lastTransitionTime":"2026-01-27T09:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.532455 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df853189-32d1-44e5-8016-631a6f2880f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c2826308ac00d904e3f5e85796421150b10d87d5705c44b9a974986ee5537c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0977701a311923ecf54012a82d2e5ca4804846c56019a08b28d7dd556af7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xqf8x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:05Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.554492 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:05Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.573291 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:05Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.583716 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:05Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.600375 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:05Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.616322 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:05Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.619716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.619754 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.619767 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.619785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.619798 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:05Z","lastTransitionTime":"2026-01-27T09:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.630231 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:05Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.651785 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:05Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.663848 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b34cd5aa-e234-4132-a206-ee911234e4fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d421b97e5f8a27808a726111b6512ca6beb22600f7ce6b0d6b181c0c9a94c269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2175d560bd3b49088520c674e6668143955bdbeb0c8fc99c8186146ab4b733e\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75205c7a7ad7b74b6a4c04b1f29c57d66e8899e41700cff45fbdcbc162a251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:05Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.676457 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66392d6e395aa6ef33d94595eb5b6670f9205bc5591c35db295b8e29d84c7c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"2026-01-27T09:54:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c708bd02-e8af-4686-84a1-1c9b692d637a\\\\n2026-01-27T09:54:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c708bd02-e8af-4686-84a1-1c9b692d637a to /host/opt/cni/bin/\\\\n2026-01-27T09:54:14Z [verbose] multus-daemon started\\\\n2026-01-27T09:54:14Z [verbose] Readiness Indicator file check\\\\n2026-01-27T09:54:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:05Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.699558 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:36Z\\\",\\\"message\\\":\\\"p_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 09:54:36.147597 6525 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0127 09:54:36.147615 6525 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 2.274197ms\\\\nI0127 09:54:36.147635 6525 services_controller.go:356] Processing sync for service openshift-etcd/etcd for network=default\\\\nI0127 09:54:36.147646 6525 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 09:54:36.147453 6525 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0127 09:54:36.147731 6525 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler 
{0x1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:05Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.712702 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p5frm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf72cba-f163-4dc2-b157-cfeb56d0177b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p5frm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:05Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.721372 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.721391 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.721398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.721410 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.721418 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:05Z","lastTransitionTime":"2026-01-27T09:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.730713 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:05Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.766781 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:05Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.781577 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:05Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.797432 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:05Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.809376 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:05Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.822753 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:05Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.824068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.824098 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:05 crc 
kubenswrapper[4869]: I0127 09:55:05.824110 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.824127 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.824138 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:05Z","lastTransitionTime":"2026-01-27T09:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.926370 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.926420 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.926430 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.926443 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:05 crc kubenswrapper[4869]: I0127 09:55:05.926452 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:05Z","lastTransitionTime":"2026-01-27T09:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.021405 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:14:18.774534703 +0000 UTC Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.028936 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.028978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.028987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.029002 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.029013 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:06Z","lastTransitionTime":"2026-01-27T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.032228 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.032282 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:06 crc kubenswrapper[4869]: E0127 09:55:06.032322 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.032365 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.032390 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:06 crc kubenswrapper[4869]: E0127 09:55:06.032502 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:06 crc kubenswrapper[4869]: E0127 09:55:06.032541 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:06 crc kubenswrapper[4869]: E0127 09:55:06.032590 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.130972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.131008 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.131016 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.131030 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.131038 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:06Z","lastTransitionTime":"2026-01-27T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.233545 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.233573 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.233585 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.233597 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.233606 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:06Z","lastTransitionTime":"2026-01-27T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.336269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.336315 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.336324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.336338 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.336348 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:06Z","lastTransitionTime":"2026-01-27T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.439335 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.439407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.439432 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.439462 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.439485 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:06Z","lastTransitionTime":"2026-01-27T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.520149 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-45hzs_8d38c693-da40-464a-9822-f98fb1b5ca35/ovnkube-controller/3.log" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.521304 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-45hzs_8d38c693-da40-464a-9822-f98fb1b5ca35/ovnkube-controller/2.log" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.525883 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerID="2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690" exitCode=1 Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.525949 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerDied","Data":"2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690"} Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.526042 4869 scope.go:117] "RemoveContainer" containerID="e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.527513 4869 scope.go:117] "RemoveContainer" containerID="2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690" Jan 27 09:55:06 crc kubenswrapper[4869]: E0127 09:55:06.527784 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-45hzs_openshift-ovn-kubernetes(8d38c693-da40-464a-9822-f98fb1b5ca35)\"" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.545033 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.545094 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.545111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.545137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.545155 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:06Z","lastTransitionTime":"2026-01-27T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.559126 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:06Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.574635 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:06Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.596538 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:06Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.609498 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:06Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.624534 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:06Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.640775 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:06Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.648634 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.648720 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.648738 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.648793 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.648810 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:06Z","lastTransitionTime":"2026-01-27T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.657281 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:06Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.671972 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:06Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.684476 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:06Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.698504 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:06Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.710800 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:06Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.720330 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df853189-32d1-44e5-8016-631a6f2880f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c2826308ac00d904e3f5e85796421150b10d87d5705c44b9a974986ee5537c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0977701a311923ecf54012a82d2e5ca4804846c56019a08b28d7dd556af7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:25Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xqf8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:06Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.732351 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:06Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.750618 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:06Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.752027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.752120 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.752131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.752180 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.752192 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:06Z","lastTransitionTime":"2026-01-27T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.761238 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b34cd5aa-e234-4132-a206-ee911234e4fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d421b97e5f8a27808a726111b6512ca6beb22600f7ce6b0d6b181c0c9a94c269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2175d560bd3b49088520c674e6668143955bdbeb0c8fc99c8186146ab4b733e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75205c7a7ad7b74b6a4c04b1f29c57d66e8899e41700cff45fbdcbc162a251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:06Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.772280 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66392d6e395aa6ef33d94595eb5b6670f9205bc5591c35db295b8e29d84c7c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"2026-01-27T09:54:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c708bd02-e8af-4686-84a1-1c9b692d637a\\\\n2026-01-27T09:54:13+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_c708bd02-e8af-4686-84a1-1c9b692d637a to /host/opt/cni/bin/\\\\n2026-01-27T09:54:14Z [verbose] multus-daemon started\\\\n2026-01-27T09:54:14Z [verbose] Readiness Indicator file check\\\\n2026-01-27T09:54:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:06Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.791053 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9eb78ed1123343117c4139a39a25e40772f20caaf1500755bf082c4b60ecd89\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:36Z\\\",\\\"message\\\":\\\"p_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.189:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d389393c-7ba9-422c-b3f5-06e391d537d2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 09:54:36.147597 6525 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}\\\\nI0127 09:54:36.147615 6525 services_controller.go:360] Finished syncing service route-controller-manager on namespace openshift-route-controller-manager for network=default : 2.274197ms\\\\nI0127 09:54:36.147635 6525 services_controller.go:356] Processing sync for service openshift-etcd/etcd for network=default\\\\nI0127 09:54:36.147646 6525 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 09:54:36.147453 6525 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nF0127 09:54:36.147731 6525 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler 
{0x1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:55:06Z\\\",\\\"message\\\":\\\"1/apis/informers/externalversions/factory.go:140\\\\nI0127 09:55:06.178485 6961 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:55:06.178786 6961 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 09:55:06.179107 6961 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 09:55:06.179406 6961 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 09:55:06.179456 6961 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 09:55:06.179465 6961 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 09:55:06.179486 6961 factory.go:656] Stopping watch factory\\\\nI0127 09:55:06.179505 6961 ovnkube.go:599] Stopped ovnkube\\\\nI0127 09:55:06.179523 6961 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 09:55:06.179529 6961 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 09:55:06.179541 6961 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 09:55:06.179570 6961 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0127 
09:55:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:06Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.802987 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p5frm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf72cba-f163-4dc2-b157-cfeb56d0177b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p5frm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:06Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.854949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.854986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.855015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.855031 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.855042 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:06Z","lastTransitionTime":"2026-01-27T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.958576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.958678 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.958694 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.958722 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:06 crc kubenswrapper[4869]: I0127 09:55:06.958740 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:06Z","lastTransitionTime":"2026-01-27T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.022577 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 07:51:10.559628736 +0000 UTC Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.060943 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.060990 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.061005 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.061025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.061041 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:07Z","lastTransitionTime":"2026-01-27T09:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.163973 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.164022 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.164034 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.164052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.164065 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:07Z","lastTransitionTime":"2026-01-27T09:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.266622 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.266706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.266718 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.266735 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.266748 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:07Z","lastTransitionTime":"2026-01-27T09:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.370015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.370082 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.370100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.370127 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.370145 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:07Z","lastTransitionTime":"2026-01-27T09:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.473305 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.473348 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.473359 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.473374 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.473387 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:07Z","lastTransitionTime":"2026-01-27T09:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.532575 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-45hzs_8d38c693-da40-464a-9822-f98fb1b5ca35/ovnkube-controller/3.log" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.538341 4869 scope.go:117] "RemoveContainer" containerID="2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690" Jan 27 09:55:07 crc kubenswrapper[4869]: E0127 09:55:07.538710 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-45hzs_openshift-ovn-kubernetes(8d38c693-da40-464a-9822-f98fb1b5ca35)\"" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.561813 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:07Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.574317 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b34cd5aa-e234-4132-a206-ee911234e4fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d421b97e5f8a27808a726111b6512ca6beb22600f7ce6b0d6b181c0c9a94c269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2175d560bd3b49088520c674e6668143955bdbeb0c8fc99c8186146ab4b733e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75205c7a7ad7b74b6a4c04b1f29c57d66e8899e41700cff45fbdcbc162a251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:07Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.576018 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.576057 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.576092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.576122 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.576135 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:07Z","lastTransitionTime":"2026-01-27T09:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.592407 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66392d6e395aa6ef33d94595eb5b6670f9205bc5591c35db295b8e29d84c7c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"2026-01-27T09:54:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c708bd02-e8af-4686-84a1-1c9b692d637a\\\\n2026-01-27T09:54:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c708bd02-e8af-4686-84a1-1c9b692d637a to /host/opt/cni/bin/\\\\n2026-01-27T09:54:14Z [verbose] multus-daemon started\\\\n2026-01-27T09:54:14Z [verbose] Readiness Indicator file check\\\\n2026-01-27T09:54:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:07Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.610885 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:55:06Z\\\",\\\"message\\\":\\\"1/apis/informers/externalversions/factory.go:140\\\\nI0127 09:55:06.178485 6961 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:55:06.178786 6961 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 09:55:06.179107 6961 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 09:55:06.179406 6961 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 09:55:06.179456 6961 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 09:55:06.179465 6961 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 09:55:06.179486 6961 factory.go:656] Stopping watch factory\\\\nI0127 09:55:06.179505 6961 ovnkube.go:599] Stopped ovnkube\\\\nI0127 09:55:06.179523 6961 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 09:55:06.179529 6961 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 09:55:06.179541 6961 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 09:55:06.179570 6961 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0127 09:55:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:55:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-45hzs_openshift-ovn-kubernetes(8d38c693-da40-464a-9822-f98fb1b5ca35)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:07Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.624536 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p5frm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf72cba-f163-4dc2-b157-cfeb56d0177b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p5frm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:07Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.642026 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:07Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.655499 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:07Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.665677 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:07Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.678483 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:07Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.679255 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.679328 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.679353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.679383 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.679406 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:07Z","lastTransitionTime":"2026-01-27T09:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.697456 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825
ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:07Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.709528 4869 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:07Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.723822 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:07Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.739753 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:07Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.754660 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:07Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.772658 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:07Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.788168 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.788216 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.788229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.788249 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.788261 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:07Z","lastTransitionTime":"2026-01-27T09:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.793606 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:07Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.807434 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df853189-32d1-44e5-8016-631a6f2880f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c2826308ac00d904e3f5e85796421150b10d87d5705c44b9a974986ee5537c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0977701a311923ecf54012a82d2e5ca4804846c56019a08b28d7dd556af7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:
24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xqf8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:07Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.820125 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:07Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.892729 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.892771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.892783 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.892799 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.892810 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:07Z","lastTransitionTime":"2026-01-27T09:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.995695 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.995740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.995752 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.995771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:07 crc kubenswrapper[4869]: I0127 09:55:07.995784 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:07Z","lastTransitionTime":"2026-01-27T09:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.023419 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 23:22:51.235435419 +0000 UTC Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.032779 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.032872 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.032879 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.032966 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:08 crc kubenswrapper[4869]: E0127 09:55:08.032979 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:08 crc kubenswrapper[4869]: E0127 09:55:08.033094 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:08 crc kubenswrapper[4869]: E0127 09:55:08.033243 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:08 crc kubenswrapper[4869]: E0127 09:55:08.033328 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.098484 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.098517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.098526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.098540 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.098549 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:08Z","lastTransitionTime":"2026-01-27T09:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.201303 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.201351 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.201363 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.201381 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.201393 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:08Z","lastTransitionTime":"2026-01-27T09:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.304209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.304269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.304283 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.304301 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.304316 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:08Z","lastTransitionTime":"2026-01-27T09:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.406580 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.406637 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.406652 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.406673 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.406688 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:08Z","lastTransitionTime":"2026-01-27T09:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.509595 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.509653 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.509661 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.509676 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.509685 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:08Z","lastTransitionTime":"2026-01-27T09:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.611308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.611355 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.611370 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.611390 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.611405 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:08Z","lastTransitionTime":"2026-01-27T09:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.715114 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.715175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.715193 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.715217 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.715238 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:08Z","lastTransitionTime":"2026-01-27T09:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.818301 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.818346 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.818361 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.818380 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.818396 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:08Z","lastTransitionTime":"2026-01-27T09:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.921522 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.921563 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.921574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.921590 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:08 crc kubenswrapper[4869]: I0127 09:55:08.921601 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:08Z","lastTransitionTime":"2026-01-27T09:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.023612 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 10:55:01.932439222 +0000 UTC Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.025318 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.025421 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.025443 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.025468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.025486 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:09Z","lastTransitionTime":"2026-01-27T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.129092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.129161 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.129193 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.129236 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.129262 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:09Z","lastTransitionTime":"2026-01-27T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.231011 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.231052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.231064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.231080 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.231091 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:09Z","lastTransitionTime":"2026-01-27T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.333176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.333229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.333245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.333269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.333286 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:09Z","lastTransitionTime":"2026-01-27T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.435657 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.435716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.435736 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.435759 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.435776 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:09Z","lastTransitionTime":"2026-01-27T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.538772 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.538822 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.538857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.538875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.538887 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:09Z","lastTransitionTime":"2026-01-27T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.641659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.641689 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.641697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.641712 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.641722 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:09Z","lastTransitionTime":"2026-01-27T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.743676 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.743732 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.743751 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.743771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.743786 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:09Z","lastTransitionTime":"2026-01-27T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.818734 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.818769 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.818777 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.818791 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.818800 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:09Z","lastTransitionTime":"2026-01-27T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:09 crc kubenswrapper[4869]: E0127 09:55:09.834270 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:09Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.837746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.837778 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.837786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.837798 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.837807 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:09Z","lastTransitionTime":"2026-01-27T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:09 crc kubenswrapper[4869]: E0127 09:55:09.868486 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:09Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.876511 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.876565 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.876586 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.876616 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.876639 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:09Z","lastTransitionTime":"2026-01-27T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:09 crc kubenswrapper[4869]: E0127 09:55:09.900304 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:09Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.909995 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.910042 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.910052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.910070 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.910080 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:09Z","lastTransitionTime":"2026-01-27T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:09 crc kubenswrapper[4869]: E0127 09:55:09.929019 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:09Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.932811 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.932861 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.932873 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.932886 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.932895 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:09Z","lastTransitionTime":"2026-01-27T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:09 crc kubenswrapper[4869]: E0127 09:55:09.945020 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:09Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:09 crc kubenswrapper[4869]: E0127 09:55:09.945131 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.946820 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.946868 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.946880 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.946895 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:09 crc kubenswrapper[4869]: I0127 09:55:09.946907 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:09Z","lastTransitionTime":"2026-01-27T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.024004 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:36:27.928812135 +0000 UTC Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.032628 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.032685 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.032664 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.032727 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:10 crc kubenswrapper[4869]: E0127 09:55:10.032965 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:10 crc kubenswrapper[4869]: E0127 09:55:10.033067 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:10 crc kubenswrapper[4869]: E0127 09:55:10.033015 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:10 crc kubenswrapper[4869]: E0127 09:55:10.033121 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.045975 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.049917 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.049945 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.049954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.049966 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.049977 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:10Z","lastTransitionTime":"2026-01-27T09:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.153093 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.153153 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.153172 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.153195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.153214 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:10Z","lastTransitionTime":"2026-01-27T09:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.256055 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.256108 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.256127 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.256150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.256167 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:10Z","lastTransitionTime":"2026-01-27T09:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.359088 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.359151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.359167 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.359191 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.359210 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:10Z","lastTransitionTime":"2026-01-27T09:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.462424 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.462467 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.462478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.462495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.462508 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:10Z","lastTransitionTime":"2026-01-27T09:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.565096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.565162 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.565184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.565212 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.565238 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:10Z","lastTransitionTime":"2026-01-27T09:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.667282 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.667326 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.667342 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.667363 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.667381 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:10Z","lastTransitionTime":"2026-01-27T09:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.770061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.770113 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.770129 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.770150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.770168 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:10Z","lastTransitionTime":"2026-01-27T09:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.872285 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.872313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.872320 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.872333 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.872342 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:10Z","lastTransitionTime":"2026-01-27T09:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.974915 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.974951 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.974958 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.974972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:10 crc kubenswrapper[4869]: I0127 09:55:10.974981 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:10Z","lastTransitionTime":"2026-01-27T09:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.024693 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 13:03:00.410565786 +0000 UTC Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.077440 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.077506 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.077524 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.077548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.077564 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:11Z","lastTransitionTime":"2026-01-27T09:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.180410 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.180444 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.180453 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.180466 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.180475 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:11Z","lastTransitionTime":"2026-01-27T09:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.284021 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.284091 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.284121 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.284151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.284175 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:11Z","lastTransitionTime":"2026-01-27T09:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.387329 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.387383 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.387399 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.387420 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.387438 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:11Z","lastTransitionTime":"2026-01-27T09:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.490178 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.490230 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.490249 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.490269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.490283 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:11Z","lastTransitionTime":"2026-01-27T09:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.592653 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.592694 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.592705 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.592721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.592732 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:11Z","lastTransitionTime":"2026-01-27T09:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.695960 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.696024 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.696045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.696074 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.696097 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:11Z","lastTransitionTime":"2026-01-27T09:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.798376 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.798437 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.798454 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.798478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.798495 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:11Z","lastTransitionTime":"2026-01-27T09:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.900598 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.900653 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.900670 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.900694 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:11 crc kubenswrapper[4869]: I0127 09:55:11.900710 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:11Z","lastTransitionTime":"2026-01-27T09:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.003467 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.003774 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.003793 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.003821 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.003885 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:12Z","lastTransitionTime":"2026-01-27T09:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.025580 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 00:07:43.099558197 +0000 UTC Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.032940 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:12 crc kubenswrapper[4869]: E0127 09:55:12.033055 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.033221 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.033391 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:12 crc kubenswrapper[4869]: E0127 09:55:12.033446 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.033518 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:12 crc kubenswrapper[4869]: E0127 09:55:12.033811 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:12 crc kubenswrapper[4869]: E0127 09:55:12.033978 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.047453 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:12Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.063998 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:12Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.078539 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:12Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.091686 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:12Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.106356 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.106380 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.106388 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.106400 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.106410 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:12Z","lastTransitionTime":"2026-01-27T09:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.108744 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825
ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:12Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.120574 4869 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:12Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.133621 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:12Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.146544 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:12Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.157619 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:12Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.172206 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:12Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.181594 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:12Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.193504 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df853189-32d1-44e5-8016-631a6f2880f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c2826308ac00d904e3f5e85796421150b10d87d5705c44b9a974986ee5537c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0977701a311923ecf54012a82d2e5ca4804846c56019a08b28d7dd556af7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:25Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xqf8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:12Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.206177 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:12Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.208950 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.208976 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.208986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.208999 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.209006 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:12Z","lastTransitionTime":"2026-01-27T09:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.218894 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acce8389-7668-40c0-ab94-904f0a1dc50b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3da1c777979a54adf96b111ac134e777821f76fb11b8b9367e390b8c3ed1bac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772664f48020be30ae006068e7a58a03ed8945a32e95eae01dec68ca47300424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://772664f48020be30ae006068e7a58a03ed8945a32e95eae01dec68ca47300424\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs
\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:12Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.244592 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673
14731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:12Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.255441 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b34cd5aa-e234-4132-a206-ee911234e4fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d421b97e5f8a27808a726111b6512ca6beb22600f7ce6b0d6b181c0c9a94c269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2175d560bd3b49088520c674e6668143955bdbeb0c8fc99c8186146ab4b733e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75205c7a7ad7b74b6a4c04b1f29c57d66e8899e41700cff45fbdcbc162a251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:12Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.271530 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66392d6e395aa6ef33d94595eb5b6670f9205bc5591c35db295b8e29d84c7c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"2026-01-27T09:54:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c708bd02-e8af-4686-84a1-1c9b692d637a\\\\n2026-01-27T09:54:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c708bd02-e8af-4686-84a1-1c9b692d637a to /host/opt/cni/bin/\\\\n2026-01-27T09:54:14Z [verbose] multus-daemon started\\\\n2026-01-27T09:54:14Z [verbose] Readiness Indicator file check\\\\n2026-01-27T09:54:59Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:12Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.294284 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:55:06Z\\\",\\\"message\\\":\\\"1/apis/informers/externalversions/factory.go:140\\\\nI0127 09:55:06.178485 6961 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:55:06.178786 6961 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 09:55:06.179107 6961 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 09:55:06.179406 6961 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 09:55:06.179456 6961 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 09:55:06.179465 6961 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 09:55:06.179486 6961 factory.go:656] Stopping watch factory\\\\nI0127 09:55:06.179505 6961 ovnkube.go:599] Stopped ovnkube\\\\nI0127 09:55:06.179523 6961 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 09:55:06.179529 6961 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 09:55:06.179541 6961 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 09:55:06.179570 6961 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0127 09:55:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:55:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-45hzs_openshift-ovn-kubernetes(8d38c693-da40-464a-9822-f98fb1b5ca35)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:12Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.304004 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p5frm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf72cba-f163-4dc2-b157-cfeb56d0177b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p5frm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:12Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.311185 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.311211 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.311220 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.311234 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:12 crc kubenswrapper[4869]: I0127 09:55:12.311244 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:12Z","lastTransitionTime":"2026-01-27T09:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
[The five-entry node-status block above (four kubelet_node_status.go:724 "Recording event message for node" entries for NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID and NodeNotReady, followed by the setters.go:603 "Node became not ready" condition carrying the same KubeletNotReady "no CNI configuration file" message) repeats with fresh timestamps roughly every 100 ms for the rest of this capture. The duplicated blocks are elided below; only the entries unique to the 09:55:12 to 09:55:16 window are kept, and the truncated "Node became not ready" entry that closes the excerpt is retained so the log resumes cleanly.]
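The status-patch failures above share a single cause: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a certificate that expired 2025-08-24, while the node clock reads 2026-01-27, so every PATCH the kubelet sends is rejected. A minimal Go sketch (not part of the cluster tooling; the address is taken verbatim from the entries above) to confirm from the node what the kubelet is seeing:

package main

// Hedged diagnostic sketch: dial the webhook endpoint the kubelet reports
// as failing and print the certificate validity window.
import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// InsecureSkipVerify is deliberate: the certificate is already known to
	// be expired, and the goal is only to read NotBefore/NotAfter.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%v notBefore=%v notAfter=%v\n", cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}

If the printed notAfter matches the 2025-08-24T17:21:41Z in the webhook errors, the fix is certificate regeneration, not anything on the kubelet side.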
Jan 27 09:55:13 crc kubenswrapper[4869]: I0127 09:55:13.026256 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:08:17.073330817 +0000 UTC
Jan 27 09:55:14 crc kubenswrapper[4869]: I0127 09:55:14.026883 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 00:33:55.216062588 +0000 UTC
Jan 27 09:55:14 crc kubenswrapper[4869]: I0127 09:55:14.032267 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm"
Jan 27 09:55:14 crc kubenswrapper[4869]: I0127 09:55:14.032379 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 09:55:14 crc kubenswrapper[4869]: E0127 09:55:14.032489 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b"
Jan 27 09:55:14 crc kubenswrapper[4869]: I0127 09:55:14.032503 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 09:55:14 crc kubenswrapper[4869]: I0127 09:55:14.032291 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 09:55:14 crc kubenswrapper[4869]: E0127 09:55:14.032640 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 09:55:14 crc kubenswrapper[4869]: E0127 09:55:14.032774 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 09:55:14 crc kubenswrapper[4869]: E0127 09:55:14.033073 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 09:55:15 crc kubenswrapper[4869]: I0127 09:55:15.027996 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 03:10:43.91050588 +0000 UTC
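The kubelet-serving entries at 09:55:13 through 09:55:16 recompute the rotation deadline once per second and land on dates already in the past (2025-11-08, 2025-11-29, 2025-12-19, 2025-11-20), which is what keeps the certificate manager re-logging. Assuming client-go's usual rule (rotate at a uniformly random point between 70% and 90% of the certificate's validity window), a sketch of why a clock at 2026-01-27 always lands past the deadline; the six-month notBefore is an illustrative assumption, since this capture only shows notAfter:

package main

// Hedged sketch of the assumed rotation-deadline jitter: a uniformly
// random point between 70% and 90% of the certificate's lifetime.
import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64() // in [0.7, 0.9)
	return notBefore.Add(time.Duration(float64(total) * fraction))
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // from the log
	notBefore := notAfter.Add(-180 * 24 * time.Hour)          // assumed window
	now := time.Date(2026, 1, 27, 9, 55, 0, 0, time.UTC)      // node clock per the log
	for i := 0; i < 4; i++ {
		d := rotationDeadline(notBefore, notAfter)
		fmt.Println("deadline:", d, "already past node clock?", d.Before(now))
	}
}

Each recomputed deadline falls months before the node clock, so the manager treats rotation as immediately due on every sync.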
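The mount and unmount failures that follow report object "..." not registered rather than NotFound: in this degraded state the kubelet never (re)populated its local watch-based object cache for those Secrets and ConfigMaps, so each operation is refused and retried only after the 1m4s durationBeforeRetry shown. One way to separate "missing from the API server" from "missing from the kubelet's cache" is to query the API directly; a hedged client-go sketch, where the kubeconfig path is illustrative and the object names come from the entries below:

package main

// Hedged sketch: check whether the objects the kubelet reports as
// "not registered" actually exist in the API server.
import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	if _, err := cs.CoreV1().Secrets("openshift-network-console").Get(ctx, "networking-console-plugin-cert", metav1.GetOptions{}); err != nil {
		fmt.Println("secret:", err)
	} else {
		fmt.Println("secret exists in the API; the kubelet cache is simply stale")
	}
	for _, cm := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
		if _, err := cs.CoreV1().ConfigMaps("openshift-network-diagnostics").Get(ctx, cm, metav1.GetOptions{}); err != nil {
			fmt.Println("configmap", cm+":", err)
		} else {
			fmt.Println("configmap", cm, "exists in the API")
		}
	}
}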
Has your network provider started?"} Jan 27 09:55:15 crc kubenswrapper[4869]: I0127 09:55:15.839969 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:55:15 crc kubenswrapper[4869]: I0127 09:55:15.840041 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:15 crc kubenswrapper[4869]: I0127 09:55:15.840064 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:15 crc kubenswrapper[4869]: I0127 09:55:15.840093 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:15 crc kubenswrapper[4869]: E0127 09:55:15.840168 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:19.840150948 +0000 UTC m=+148.460575031 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:55:15 crc kubenswrapper[4869]: E0127 09:55:15.840232 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 09:55:15 crc kubenswrapper[4869]: E0127 09:55:15.840316 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 09:56:19.840291942 +0000 UTC m=+148.460716065 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 09:55:15 crc kubenswrapper[4869]: E0127 09:55:15.840521 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 09:55:15 crc kubenswrapper[4869]: E0127 09:55:15.840540 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 09:55:15 crc kubenswrapper[4869]: E0127 09:55:15.840570 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:55:15 crc kubenswrapper[4869]: E0127 09:55:15.840643 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 09:56:19.840617691 +0000 UTC m=+148.461041774 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:55:15 crc kubenswrapper[4869]: E0127 09:55:15.840522 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 09:55:15 crc kubenswrapper[4869]: E0127 09:55:15.840816 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 09:56:19.840803407 +0000 UTC m=+148.461227560 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 09:55:15 crc kubenswrapper[4869]: I0127 09:55:15.912650 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:15 crc kubenswrapper[4869]: I0127 09:55:15.912735 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:15 crc kubenswrapper[4869]: I0127 09:55:15.912759 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:15 crc kubenswrapper[4869]: I0127 09:55:15.912786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:15 crc kubenswrapper[4869]: I0127 09:55:15.912803 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:15Z","lastTransitionTime":"2026-01-27T09:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:15 crc kubenswrapper[4869]: I0127 09:55:15.941184 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:15 crc kubenswrapper[4869]: E0127 09:55:15.941334 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 09:55:15 crc kubenswrapper[4869]: E0127 09:55:15.941371 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 09:55:15 crc kubenswrapper[4869]: E0127 09:55:15.941382 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:55:15 crc kubenswrapper[4869]: E0127 09:55:15.941432 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 09:56:19.941416884 +0000 UTC m=+148.561840967 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.014982 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.015017 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.015025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.015086 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.015097 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:16Z","lastTransitionTime":"2026-01-27T09:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.028705 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 06:24:49.525420113 +0000 UTC Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.033034 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.033116 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:16 crc kubenswrapper[4869]: E0127 09:55:16.033143 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.033042 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.033188 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:16 crc kubenswrapper[4869]: E0127 09:55:16.033320 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:16 crc kubenswrapper[4869]: E0127 09:55:16.033414 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:16 crc kubenswrapper[4869]: E0127 09:55:16.033496 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.118185 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.118230 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.118246 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.118271 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.118288 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:16Z","lastTransitionTime":"2026-01-27T09:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.221197 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.221230 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.221239 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.221253 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.221292 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:16Z","lastTransitionTime":"2026-01-27T09:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.323584 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.323622 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.323633 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.323648 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.323658 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:16Z","lastTransitionTime":"2026-01-27T09:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.426057 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.426100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.426114 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.426132 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.426144 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:16Z","lastTransitionTime":"2026-01-27T09:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.528576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.528641 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.528656 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.528679 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.528696 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:16Z","lastTransitionTime":"2026-01-27T09:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.630529 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.630559 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.630568 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.630584 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.630594 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:16Z","lastTransitionTime":"2026-01-27T09:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.732785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.732849 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.732863 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.732881 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.732892 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:16Z","lastTransitionTime":"2026-01-27T09:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.834782 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.834815 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.834825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.834855 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.834865 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:16Z","lastTransitionTime":"2026-01-27T09:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.937067 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.937129 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.937140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.937155 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:16 crc kubenswrapper[4869]: I0127 09:55:16.937168 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:16Z","lastTransitionTime":"2026-01-27T09:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.029391 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 17:25:58.714562657 +0000 UTC Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.039518 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.039571 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.039582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.039594 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.039621 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:17Z","lastTransitionTime":"2026-01-27T09:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.141952 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.141983 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.141991 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.142004 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.142012 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:17Z","lastTransitionTime":"2026-01-27T09:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.243991 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.244088 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.244100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.244117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.244129 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:17Z","lastTransitionTime":"2026-01-27T09:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.346401 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.346432 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.346443 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.346458 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.346468 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:17Z","lastTransitionTime":"2026-01-27T09:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.448666 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.448718 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.448730 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.448748 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.448760 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:17Z","lastTransitionTime":"2026-01-27T09:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.551278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.551313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.551324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.551341 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.551353 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:17Z","lastTransitionTime":"2026-01-27T09:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.653696 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.653757 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.653778 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.653806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.653827 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:17Z","lastTransitionTime":"2026-01-27T09:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.756582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.756620 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.756629 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.756641 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.756666 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:17Z","lastTransitionTime":"2026-01-27T09:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.858869 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.858919 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.858930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.858947 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.858958 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:17Z","lastTransitionTime":"2026-01-27T09:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.961106 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.961164 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.961176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.961190 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:17 crc kubenswrapper[4869]: I0127 09:55:17.961211 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:17Z","lastTransitionTime":"2026-01-27T09:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.030403 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 04:30:26.467357324 +0000 UTC Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.033045 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.033091 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:18 crc kubenswrapper[4869]: E0127 09:55:18.033205 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.033428 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:18 crc kubenswrapper[4869]: E0127 09:55:18.033518 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.033539 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:18 crc kubenswrapper[4869]: E0127 09:55:18.033716 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:18 crc kubenswrapper[4869]: E0127 09:55:18.034147 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.034442 4869 scope.go:117] "RemoveContainer" containerID="2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690" Jan 27 09:55:18 crc kubenswrapper[4869]: E0127 09:55:18.034579 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-45hzs_openshift-ovn-kubernetes(8d38c693-da40-464a-9822-f98fb1b5ca35)\"" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.063092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.063118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.063127 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.063137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.063145 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:18Z","lastTransitionTime":"2026-01-27T09:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.166516 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.166581 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.166604 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.166621 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.166632 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:18Z","lastTransitionTime":"2026-01-27T09:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.269588 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.269644 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.269661 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.269684 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.269701 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:18Z","lastTransitionTime":"2026-01-27T09:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.371701 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.371738 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.371746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.371759 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.371768 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:18Z","lastTransitionTime":"2026-01-27T09:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.473379 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.473430 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.473441 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.473455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.473464 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:18Z","lastTransitionTime":"2026-01-27T09:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.576313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.576357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.576369 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.576385 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.576397 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:18Z","lastTransitionTime":"2026-01-27T09:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.678550 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.678641 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.678663 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.678693 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.678710 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:18Z","lastTransitionTime":"2026-01-27T09:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.780820 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.780944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.780968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.781041 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.781068 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:18Z","lastTransitionTime":"2026-01-27T09:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.883466 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.883521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.883539 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.883561 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.883582 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:18Z","lastTransitionTime":"2026-01-27T09:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.986985 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.987053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.987072 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.987094 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:18 crc kubenswrapper[4869]: I0127 09:55:18.987110 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:18Z","lastTransitionTime":"2026-01-27T09:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.031580 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 10:57:26.946370798 +0000 UTC Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.090324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.090373 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.090384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.090401 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.090412 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:19Z","lastTransitionTime":"2026-01-27T09:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.193401 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.193446 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.193458 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.193472 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.193482 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:19Z","lastTransitionTime":"2026-01-27T09:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.297043 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.297126 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.297146 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.297173 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.297192 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:19Z","lastTransitionTime":"2026-01-27T09:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.399969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.400051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.400071 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.400096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.400117 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:19Z","lastTransitionTime":"2026-01-27T09:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.502789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.502872 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.502889 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.502912 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.502929 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:19Z","lastTransitionTime":"2026-01-27T09:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.605508 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.605570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.605586 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.605611 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.605630 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:19Z","lastTransitionTime":"2026-01-27T09:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.708618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.708657 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.708668 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.708680 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.708690 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:19Z","lastTransitionTime":"2026-01-27T09:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.811479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.811534 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.811545 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.811563 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.811574 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:19Z","lastTransitionTime":"2026-01-27T09:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.914503 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.914550 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.914561 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.914577 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:19 crc kubenswrapper[4869]: I0127 09:55:19.914589 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:19Z","lastTransitionTime":"2026-01-27T09:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.017755 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.017850 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.017862 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.017877 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.017905 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:20Z","lastTransitionTime":"2026-01-27T09:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.032526 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 14:37:48.896474469 +0000 UTC Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.032805 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.032895 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.033038 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:20 crc kubenswrapper[4869]: E0127 09:55:20.033030 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:20 crc kubenswrapper[4869]: E0127 09:55:20.033190 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.033310 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:20 crc kubenswrapper[4869]: E0127 09:55:20.033362 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:20 crc kubenswrapper[4869]: E0127 09:55:20.033613 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.120608 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.120686 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.120708 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.120736 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.120755 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:20Z","lastTransitionTime":"2026-01-27T09:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.224255 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.224319 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.224338 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.224362 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.224380 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:20Z","lastTransitionTime":"2026-01-27T09:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.278267 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.278328 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.278338 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.278350 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.278360 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:20Z","lastTransitionTime":"2026-01-27T09:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:20 crc kubenswrapper[4869]: E0127 09:55:20.291344 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:20Z is after 
2025-08-24T17:21:41Z" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.296092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.296149 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.296171 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.296199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.296216 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:20Z","lastTransitionTime":"2026-01-27T09:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:20 crc kubenswrapper[4869]: E0127 09:55:20.315608 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T09:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 09:55:20 crc kubenswrapper[4869]: E0127 09:55:20.337556 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:20Z is after 2025-08-24T17:21:41Z" [status patch payload elided; byte-identical to the 09:55:20.291344 attempt above]
Jan 27 09:55:20 crc kubenswrapper[4869]: E0127 09:55:20.355716 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:20Z is after 2025-08-24T17:21:41Z" [status patch payload elided; byte-identical to the 09:55:20.291344 attempt above]
Jan 27 09:55:20 crc kubenswrapper[4869]: E0127 09:55:20.372246 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…} [status patch payload identical to the previous attempts; entry truncated at the end of the captured log]
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c689fb94-bab9-4f05-8ced-2230ba4f7ed7\\\",\\\"systemUUID\\\":\\\"8cdf7e61-b3ba-4c46-bd8a-b18a7fb3b94a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:20Z is after 
2025-08-24T17:21:41Z" Jan 27 09:55:20 crc kubenswrapper[4869]: E0127 09:55:20.372354 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.373522 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.373547 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.373556 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.373570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.373579 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:20Z","lastTransitionTime":"2026-01-27T09:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.476286 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.476328 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.476341 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.476358 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.476371 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:20Z","lastTransitionTime":"2026-01-27T09:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.578379 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.578418 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.578427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.578441 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.578450 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:20Z","lastTransitionTime":"2026-01-27T09:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.680342 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.680376 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.680385 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.680398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.680408 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:20Z","lastTransitionTime":"2026-01-27T09:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.782871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.782924 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.782941 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.782974 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.782991 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:20Z","lastTransitionTime":"2026-01-27T09:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.885764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.885822 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.885872 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.885899 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.885916 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:20Z","lastTransitionTime":"2026-01-27T09:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.989108 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.989160 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.989176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.989198 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:20 crc kubenswrapper[4869]: I0127 09:55:20.989214 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:20Z","lastTransitionTime":"2026-01-27T09:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.032935 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 08:30:24.322653094 +0000 UTC Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.091474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.091546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.091565 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.091587 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.091603 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:21Z","lastTransitionTime":"2026-01-27T09:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.194147 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.194191 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.194202 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.194217 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.194227 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:21Z","lastTransitionTime":"2026-01-27T09:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.297103 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.297156 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.297172 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.297193 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.297283 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:21Z","lastTransitionTime":"2026-01-27T09:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.400023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.400069 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.400080 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.400097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.400108 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:21Z","lastTransitionTime":"2026-01-27T09:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.502683 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.502738 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.502745 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.502758 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.502766 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:21Z","lastTransitionTime":"2026-01-27T09:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.606141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.606217 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.606239 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.606268 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.606290 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:21Z","lastTransitionTime":"2026-01-27T09:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.708688 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.708727 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.708739 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.708753 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.708763 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:21Z","lastTransitionTime":"2026-01-27T09:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.810954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.810992 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.811000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.811017 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.811028 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:21Z","lastTransitionTime":"2026-01-27T09:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.913355 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.913393 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.913402 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.913419 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:21 crc kubenswrapper[4869]: I0127 09:55:21.913427 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:21Z","lastTransitionTime":"2026-01-27T09:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.016017 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.016063 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.016073 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.016088 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.016097 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:22Z","lastTransitionTime":"2026-01-27T09:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.032606 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.032690 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:22 crc kubenswrapper[4869]: E0127 09:55:22.032727 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.032787 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.032866 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:22 crc kubenswrapper[4869]: E0127 09:55:22.032869 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:22 crc kubenswrapper[4869]: E0127 09:55:22.033015 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:22 crc kubenswrapper[4869]: E0127 09:55:22.033113 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.033153 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 10:02:58.694176362 +0000 UTC Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.043101 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bgt4x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b770b3-2bdc-4098-97a1-10a2dd539d16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f687bc3a1275a4d1603d28a1332ed5067c22ca00a77dae0505b079a67a371582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2jbv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bgt4x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.057502 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"610cadf1-85e4-40f1-a551-998262507ca2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40cdd9e064620c14c8d1065ad5206ac8b735e8a197e07cfbc0df23851d12ce14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://917a7d10cb74b27f37e3434240da7f17b94856aa265c7139c3511b04add3c33f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c7b825ff799f2039eda725e1410653c405bd642b1e4b7fc5f65cf2277030bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://827e6c6086d4f82d98c6533080b398d0c27d21731d416428810d6c053701cd34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ce097a1d4e652919529777f17599474fcec168edd280662428707a2cb91140c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39f1d4cc09fe40c02de142321f1619fc35a973a597f0541868bf553e9be0c65d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6692505613ddeb3608df42c528efac64301c6c5459e9baef3952b106646b046a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b626l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9pfwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.067789 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bv4rq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b78eed-8d48-4b1c-962d-35ae7b8c1468\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6df5f2f9bc004817b49775d5b862deedc06370941b6881b7541d1dc18d8389a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2p6x9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bv4rq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.078195 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a1c3f79-999a-4744-a008-3105e31e7a01\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f5612be93b9ba98ba090a39d7ae9f1213e2847b861a2e7aef335d232a769ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a23a278edd664f6ee7d3adb0911bc0ecc0878dd3c00a2c25aee4906785255\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be943416c64858b89093480f391a7c7e0898f3a847f967975c791e281a8c5796\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.090700 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:14Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f872df4222f1aedf152de5d9a2d953d68b8f8c67b794f74d6aacc22a08745ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.102924 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://290fecf0889828bff5d97e7ea064479e250a3ce353fb3102143945a90cd0a920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.114046 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4717b818caaaa5248c2b5937ffad2daf755cb2304aceeaf7a62a1ca135e3c957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2dfad6ea1b781d3206bf3a30d2d5a8c782c2c3616724cab733c0d434fb7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.118033 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.118060 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.118069 4869 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.118081 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.118090 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:22Z","lastTransitionTime":"2026-01-27T09:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.127441 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a3e458-3f5f-46cf-b242-9a3986250bcf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6551d2fe8fb3bd3aca13a0584b9efa10f76a3e1b3fa090c977c894952e763ec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k25vw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2qh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.140287 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df853189-32d1-44e5-8016-631a6f2880f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07c2826308ac00d904e3f5e85796421150b10d87d5705c44b9a974986ee5537c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a0977701a311923ecf54012a82d2e5ca4804846c56019a08b28d7dd556af7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27
T09:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c2tzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xqf8x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.151426 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.161710 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.172811 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:11Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.194743 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55c32b0f-8923-45c7-8035-26900ba6048b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 09:54:12.699332 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 09:54:12.699582 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 09:54:12.700982 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2915630996/tls.crt::/tmp/serving-cert-2915630996/tls.key\\\\\\\"\\\\nI0127 09:54:13.060818 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 09:54:13.065102 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 09:54:13.065131 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 09:54:13.065157 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 09:54:13.065166 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 09:54:13.074948 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 09:54:13.075094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 09:54:13.075138 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0127 09:54:13.074980 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 09:54:13.075167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 09:54:13.075249 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 09:54:13.075273 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 
09:54:13.075280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 09:54:13.084202 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.209268 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acce8389-7668-40c0-ab94-904f0a1dc50b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3da1c777979a54adf96b111ac134e777821f76fb11b8b9367e390b8c3ed1bac5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://772664f48020be30ae006068e7a58a03ed8945a32e95eae01dec68ca47300424\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://772664f48020be30ae006068e7a58a03ed8945a32e95eae01dec68ca47300424\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.221039 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.221086 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.221100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.221119 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.221132 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:22Z","lastTransitionTime":"2026-01-27T09:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.228857 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d38c693-da40-464a-9822-f98fb1b5ca35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ae67d8917edd39a4b61e8df9ebd1021e564ccc8
8dbbfca56181bb2968f82690\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:55:06Z\\\",\\\"message\\\":\\\"1/apis/informers/externalversions/factory.go:140\\\\nI0127 09:55:06.178485 6961 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 09:55:06.178786 6961 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 09:55:06.179107 6961 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0127 09:55:06.179406 6961 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 09:55:06.179456 6961 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 09:55:06.179465 6961 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 09:55:06.179486 6961 factory.go:656] Stopping watch factory\\\\nI0127 09:55:06.179505 6961 ovnkube.go:599] Stopped ovnkube\\\\nI0127 09:55:06.179523 6961 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 09:55:06.179529 6961 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0127 09:55:06.179541 6961 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 09:55:06.179570 6961 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nF0127 09:55:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:55:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-45hzs_openshift-ovn-kubernetes(8d38c693-da40-464a-9822-f98fb1b5ca35)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl2nv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-45hzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.239257 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p5frm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0bf72cba-f163-4dc2-b157-cfeb56d0177b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvf4w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p5frm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.262465 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1072ac78-cddd-4240-8f01-df735bc46fab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://161db4b23095a99400912f90f322e141bd6a6b0944123edaedcd50c0fbeab7b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7f26bf2e7a0e13b14dcebaa9cc0539256180d24ccd603913d70106f603c9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7509b57c61e17005db0d5ae2d7f563ec36524f98d8b946627ffa965f40414c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e5a8f86688a1597cda5315b78dd43e1338cfc2
459726b2250c4cb475d82b7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f30335dd90dfaaf54fbfe2df9616d2233e4ac4228c01789f00dd6eccd8024432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13029585ff1f5c52b084918548bce86be7175cf6943424fba61332fdbb03b562\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a76e634de19ec5da961557ecbb40b62314b5d372e680579b53cbaace93c33372\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ee85e74fafdd19fef6707b96f135e62d6606eeb8a1dac2af4840a28009933c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.273787 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b34cd5aa-e234-4132-a206-ee911234e4fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:53:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d421b97e5f8a27808a726111b6512ca6beb22600f7ce6b0d6b181c0c9a94c269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2175d560bd3b49088520c674e6668143955bdbeb0c8fc99c8186146ab4b733e\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75205c7a7ad7b74b6a4c04b1f29c57d66e8899e41700cff45fbdcbc162a251f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:53:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b4eb38ce03b43343ca683fe663080605983826663826e16ba88e03ef1501b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T09:53:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T09:53:53Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:53:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.289790 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xj5gd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4e8dfa0-1849-457a-b564-4f77e534a7e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T09:55:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66392d6e395aa6ef33d94595eb5b6670f9205bc5591c35db295b8e29d84c7c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T09:54:59Z\\\",\\\"message\\\":\\\"2026-01-27T09:54:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c708bd02-e8af-4686-84a1-1c9b692d637a\\\\n2026-01-27T09:54:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c708bd02-e8af-4686-84a1-1c9b692d637a to /host/opt/cni/bin/\\\\n2026-01-27T09:54:14Z [verbose] multus-daemon started\\\\n2026-01-27T09:54:14Z [verbose] Readiness Indicator file check\\\\n2026-01-27T09:54:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T09:54:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T09:54:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsxp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T09:54:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xj5gd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T09:55:22Z is after 2025-08-24T17:21:41Z" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.323509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.323545 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.323562 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.323578 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.323588 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:22Z","lastTransitionTime":"2026-01-27T09:55:22Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.425611 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.425655 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.425665 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.425682 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.425693 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:22Z","lastTransitionTime":"2026-01-27T09:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.527722 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.527758 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.527768 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.527782 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.527792 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:22Z","lastTransitionTime":"2026-01-27T09:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.629997 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.630040 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.630051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.630067 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.630079 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:22Z","lastTransitionTime":"2026-01-27T09:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.732582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.732627 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.732643 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.732666 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.732677 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:22Z","lastTransitionTime":"2026-01-27T09:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.834477 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.834508 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.834519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.834533 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.834543 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:22Z","lastTransitionTime":"2026-01-27T09:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.937043 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.937086 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.937097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.937115 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:22 crc kubenswrapper[4869]: I0127 09:55:22.937125 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:22Z","lastTransitionTime":"2026-01-27T09:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.033870 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 18:32:01.459886172 +0000 UTC Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.039659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.039696 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.039714 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.039734 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.039749 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:23Z","lastTransitionTime":"2026-01-27T09:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.142278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.142324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.142332 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.142346 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.142355 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:23Z","lastTransitionTime":"2026-01-27T09:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.244420 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.244449 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.244457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.244469 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.244477 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:23Z","lastTransitionTime":"2026-01-27T09:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.346979 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.347015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.347025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.347041 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.347049 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:23Z","lastTransitionTime":"2026-01-27T09:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.448635 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.448670 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.448687 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.448704 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.448714 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:23Z","lastTransitionTime":"2026-01-27T09:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.551507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.551588 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.551611 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.551640 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.551657 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:23Z","lastTransitionTime":"2026-01-27T09:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.653822 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.653883 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.653895 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.653911 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.653923 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:23Z","lastTransitionTime":"2026-01-27T09:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.755998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.756047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.756058 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.756079 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.756090 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:23Z","lastTransitionTime":"2026-01-27T09:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.858131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.858175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.858190 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.858210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.858226 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:23Z","lastTransitionTime":"2026-01-27T09:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.960604 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.960636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.960647 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.960662 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:23 crc kubenswrapper[4869]: I0127 09:55:23.960672 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:23Z","lastTransitionTime":"2026-01-27T09:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.033366 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.033501 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:24 crc kubenswrapper[4869]: E0127 09:55:24.033585 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.033636 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:24 crc kubenswrapper[4869]: E0127 09:55:24.033726 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.033803 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:24 crc kubenswrapper[4869]: E0127 09:55:24.033966 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.034041 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:00:00.775871991 +0000 UTC Jan 27 09:55:24 crc kubenswrapper[4869]: E0127 09:55:24.034082 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.062912 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.062942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.062950 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.062969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.062977 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:24Z","lastTransitionTime":"2026-01-27T09:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.166971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.167032 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.167049 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.167071 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.167088 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:24Z","lastTransitionTime":"2026-01-27T09:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.273010 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.273090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.273100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.273115 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.273124 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:24Z","lastTransitionTime":"2026-01-27T09:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.375504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.375536 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.375544 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.375558 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.375567 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:24Z","lastTransitionTime":"2026-01-27T09:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.478141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.478239 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.478261 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.478289 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.478310 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:24Z","lastTransitionTime":"2026-01-27T09:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.581084 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.581125 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.581137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.581169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.581181 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:24Z","lastTransitionTime":"2026-01-27T09:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.684108 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.684176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.684186 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.684200 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.684210 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:24Z","lastTransitionTime":"2026-01-27T09:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.786785 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.786827 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.786853 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.786867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.786876 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:24Z","lastTransitionTime":"2026-01-27T09:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.889780 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.889818 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.889853 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.889869 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.889879 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:24Z","lastTransitionTime":"2026-01-27T09:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.992635 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.992690 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.992702 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.992719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:24 crc kubenswrapper[4869]: I0127 09:55:24.992733 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:24Z","lastTransitionTime":"2026-01-27T09:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.034135 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 05:36:52.04855483 +0000 UTC Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.095358 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.095388 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.095396 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.095409 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.095417 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:25Z","lastTransitionTime":"2026-01-27T09:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.198418 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.198491 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.198514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.198538 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.198555 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:25Z","lastTransitionTime":"2026-01-27T09:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.301406 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.301451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.301461 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.301477 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.301486 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:25Z","lastTransitionTime":"2026-01-27T09:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.404440 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.404479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.404487 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.404503 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.404512 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:25Z","lastTransitionTime":"2026-01-27T09:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.507815 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.507897 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.507914 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.507937 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.507953 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:25Z","lastTransitionTime":"2026-01-27T09:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.609382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.609428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.609441 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.609458 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.609470 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:25Z","lastTransitionTime":"2026-01-27T09:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.711793 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.711850 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.711864 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.711883 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.711892 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:25Z","lastTransitionTime":"2026-01-27T09:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.814235 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.814271 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.814282 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.814297 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.814306 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:25Z","lastTransitionTime":"2026-01-27T09:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.917006 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.917076 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.917097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.917124 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:25 crc kubenswrapper[4869]: I0127 09:55:25.917149 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:25Z","lastTransitionTime":"2026-01-27T09:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.019534 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.019616 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.019646 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.019671 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.019692 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:26Z","lastTransitionTime":"2026-01-27T09:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.032975 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.033038 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:26 crc kubenswrapper[4869]: E0127 09:55:26.033105 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.033186 4869 util.go:30] "No sandbox for pod can be found. 
Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.033398 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 09:55:26 crc kubenswrapper[4869]: E0127 09:55:26.033526 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 09:55:26 crc kubenswrapper[4869]: E0127 09:55:26.033627 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 09:55:26 crc kubenswrapper[4869]: E0127 09:55:26.033761 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.034357 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 12:35:44.119570912 +0000 UTC
Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.122416 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.122478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.122490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.122503 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.122512 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:26Z","lastTransitionTime":"2026-01-27T09:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
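None of the four pods named above can get a sandbox, because sandbox creation for ordinary pods is gated on the runtime's network readiness; the pod workers log "Error syncing pod, skipping" and requeue them for the next sync. A rough sketch of that gate (a simplification, not the kubelet's exact code); hostNetwork pods are exempt, which is why static control-plane pods keep running while these four stay Pending:

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-in for the check behind "network is not ready:
// container runtime network not ready: NetworkReady=false ..." above.
type pod struct {
	name        string
	hostNetwork bool
}

var errNetworkNotReady = errors.New(
	"network is not ready: container runtime network not ready: NetworkReady=false")

func canCreateSandbox(p pod, networkReady bool) error {
	if p.hostNetwork || networkReady {
		return nil
	}
	// The real pod worker records this error and retries the pod later.
	return errNetworkNotReady
}

func main() {
	for _, p := range []pod{
		{"openshift-multus/network-metrics-daemon-p5frm", false},
		{"example hostNetwork static pod", true}, // hypothetical, for contrast
	} {
		fmt.Printf("%s -> %v\n", p.name, canCreateSandbox(p, false))
	}
}
```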
Has your network provider started?"} Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.225278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.225327 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.225338 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.225356 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.225369 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:26Z","lastTransitionTime":"2026-01-27T09:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.328113 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.328176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.328198 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.328218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.328236 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:26Z","lastTransitionTime":"2026-01-27T09:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.430861 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.430921 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.430936 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.430956 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.430972 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:26Z","lastTransitionTime":"2026-01-27T09:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.533818 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.533878 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.533893 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.533917 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.533931 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:26Z","lastTransitionTime":"2026-01-27T09:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.636180 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.636224 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.636235 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.636251 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.636262 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:26Z","lastTransitionTime":"2026-01-27T09:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.739172 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.739200 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.739207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.739219 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.739230 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:26Z","lastTransitionTime":"2026-01-27T09:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.840998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.841024 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.841034 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.841052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.841069 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:26Z","lastTransitionTime":"2026-01-27T09:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.944724 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.944771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.944781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.944801 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:26 crc kubenswrapper[4869]: I0127 09:55:26.944812 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:26Z","lastTransitionTime":"2026-01-27T09:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.035158 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 00:21:21.906173907 +0000 UTC Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.047348 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.047384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.047438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.047459 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.047470 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:27Z","lastTransitionTime":"2026-01-27T09:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.150245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.150318 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.150339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.150388 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.150413 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:27Z","lastTransitionTime":"2026-01-27T09:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.254157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.254239 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.254259 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.254282 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.254300 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:27Z","lastTransitionTime":"2026-01-27T09:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.356472 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.356512 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.356522 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.356537 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.356549 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:27Z","lastTransitionTime":"2026-01-27T09:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.459933 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.460005 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.460028 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.460057 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.460079 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:27Z","lastTransitionTime":"2026-01-27T09:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.562929 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.562961 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.562971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.562985 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.562994 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:27Z","lastTransitionTime":"2026-01-27T09:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.665577 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.665650 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.665664 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.665705 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.665718 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:27Z","lastTransitionTime":"2026-01-27T09:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.768110 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.768145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.768153 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.768168 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.768177 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:27Z","lastTransitionTime":"2026-01-27T09:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.871003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.871048 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.871060 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.871078 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.871090 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:27Z","lastTransitionTime":"2026-01-27T09:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.974388 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.974430 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.974438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.974452 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:27 crc kubenswrapper[4869]: I0127 09:55:27.974461 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:27Z","lastTransitionTime":"2026-01-27T09:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.032371 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.032427 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:28 crc kubenswrapper[4869]: E0127 09:55:28.032494 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.032507 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:28 crc kubenswrapper[4869]: E0127 09:55:28.032588 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:28 crc kubenswrapper[4869]: E0127 09:55:28.032704 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.032774 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:28 crc kubenswrapper[4869]: E0127 09:55:28.032908 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.035353 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 07:15:48.172092616 +0000 UTC Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.076718 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.076781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.076793 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.076809 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.076820 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:28Z","lastTransitionTime":"2026-01-27T09:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.180434 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.180481 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.180492 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.180509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.180523 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:28Z","lastTransitionTime":"2026-01-27T09:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.282641 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.282671 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.282679 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.282691 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.282716 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:28Z","lastTransitionTime":"2026-01-27T09:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.385209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.385257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.385270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.385287 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.385296 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:28Z","lastTransitionTime":"2026-01-27T09:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.487670 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.487710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.487720 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.487736 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.487748 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:28Z","lastTransitionTime":"2026-01-27T09:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.589813 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.589869 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.589881 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.589894 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.589902 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:28Z","lastTransitionTime":"2026-01-27T09:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.692334 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.692373 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.692382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.692399 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.692409 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:28Z","lastTransitionTime":"2026-01-27T09:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.795143 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.795206 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.795221 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.795241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.795254 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:28Z","lastTransitionTime":"2026-01-27T09:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.897244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.897308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.897321 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.897338 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.897370 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:28Z","lastTransitionTime":"2026-01-27T09:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.999489 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.999534 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.999543 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.999558 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:28 crc kubenswrapper[4869]: I0127 09:55:28.999567 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:28Z","lastTransitionTime":"2026-01-27T09:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.036189 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:38:04.243251782 +0000 UTC Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.102182 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.102231 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.102242 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.102261 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.102273 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:29Z","lastTransitionTime":"2026-01-27T09:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.205997 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.206085 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.206110 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.206141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.206164 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:29Z","lastTransitionTime":"2026-01-27T09:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.308358 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.308400 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.308411 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.308426 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.308436 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:29Z","lastTransitionTime":"2026-01-27T09:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.410917 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.410974 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.410984 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.410996 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.411005 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:29Z","lastTransitionTime":"2026-01-27T09:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.513663 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.513695 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.513705 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.513719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.513728 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:29Z","lastTransitionTime":"2026-01-27T09:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.615738 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.615778 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.615789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.615804 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.615816 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:29Z","lastTransitionTime":"2026-01-27T09:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.718951 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.719019 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.719037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.719061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.719078 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:29Z","lastTransitionTime":"2026-01-27T09:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.784674 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs\") pod \"network-metrics-daemon-p5frm\" (UID: \"0bf72cba-f163-4dc2-b157-cfeb56d0177b\") " pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:29 crc kubenswrapper[4869]: E0127 09:55:29.784955 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 09:55:29 crc kubenswrapper[4869]: E0127 09:55:29.785045 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs podName:0bf72cba-f163-4dc2-b157-cfeb56d0177b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:33.785021033 +0000 UTC m=+162.405445146 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs") pod "network-metrics-daemon-p5frm" (UID: "0bf72cba-f163-4dc2-b157-cfeb56d0177b") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.827214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.827319 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.827343 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.827372 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.827391 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:29Z","lastTransitionTime":"2026-01-27T09:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.931020 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.931128 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.931152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.931183 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 09:55:29 crc kubenswrapper[4869]: I0127 09:55:29.931206 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:29Z","lastTransitionTime":"2026-01-27T09:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.032259 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.032452 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.032486 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.032449 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm"
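The MountVolume.SetUp failure for metrics-certs is retried by the volume operation executor with exponential backoff: "durationBeforeRetry 1m4s" is 64 s, consistent with a 500 ms initial delay doubled once per consecutive failure (0.5 s x 2^7 after the eighth failure), capped at 2m2s (assumed defaults from kubelet's exponentialbackoff package). The "not registered" error itself appears to come from the kubelet's watch-based secret manager, which has not yet registered openshift-multus/metrics-daemon-secret for this pod. A sketch of the retry-delay computation under those assumed parameters:

```go
package main

import (
	"fmt"
	"time"
)

// durationBeforeRetry doubles the delay for each consecutive failure,
// starting at 500ms and capping at 2m2s (assumed kubelet defaults).
func durationBeforeRetry(failures int) time.Duration {
	const (
		initial = 500 * time.Millisecond
		max     = 2*time.Minute + 2*time.Second
	)
	d := initial
	for i := 1; i < failures; i++ {
		d *= 2
		if d > max {
			return max
		}
	}
	return d
}

func main() {
	for n := 1; n <= 9; n++ {
		fmt.Printf("failure %d -> retry in %s\n", n, durationBeforeRetry(n))
	}
	// failure 8 prints "1m4s", matching the log's durationBeforeRetry.
}
```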
Jan 27 09:55:30 crc kubenswrapper[4869]: E0127 09:55:30.032656 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 09:55:30 crc kubenswrapper[4869]: E0127 09:55:30.032806 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 09:55:30 crc kubenswrapper[4869]: E0127 09:55:30.033367 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 09:55:30 crc kubenswrapper[4869]: E0127 09:55:30.033640 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b"
Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.034297 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.034340 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.034358 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.034382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.034402 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:30Z","lastTransitionTime":"2026-01-27T09:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.036425 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 23:03:27.621961834 +0000 UTC Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.138495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.138545 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.138554 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.138570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.138580 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:30Z","lastTransitionTime":"2026-01-27T09:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.242128 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.242200 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.242223 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.242255 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.242278 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:30Z","lastTransitionTime":"2026-01-27T09:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.345720 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.345790 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.345807 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.346009 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.346035 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:30Z","lastTransitionTime":"2026-01-27T09:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.449574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.449660 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.449683 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.449713 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.449735 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:30Z","lastTransitionTime":"2026-01-27T09:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.552140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.552181 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.552195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.552212 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.552223 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:30Z","lastTransitionTime":"2026-01-27T09:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.654971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.655041 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.655061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.655085 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.655102 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:30Z","lastTransitionTime":"2026-01-27T09:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.668138 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.668180 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.668191 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.668205 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.668214 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T09:55:30Z","lastTransitionTime":"2026-01-27T09:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.732119 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wtc89"] Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.732461 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wtc89" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.735743 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.736418 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.736714 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.738910 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.750758 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=45.750736535 podStartE2EDuration="45.750736535s" podCreationTimestamp="2026-01-27 09:54:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:55:30.750651873 +0000 UTC m=+99.371075996" watchObservedRunningTime="2026-01-27 09:55:30.750736535 +0000 UTC m=+99.371160638" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.797555 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-xj5gd" podStartSLOduration=79.797531459 podStartE2EDuration="1m19.797531459s" podCreationTimestamp="2026-01-27 09:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:55:30.766498441 +0000 UTC m=+99.386922554" watchObservedRunningTime="2026-01-27 09:55:30.797531459 +0000 UTC m=+99.417955582" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.862612 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=78.862596185 podStartE2EDuration="1m18.862596185s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:55:30.846048235 +0000 UTC m=+99.466472338" watchObservedRunningTime="2026-01-27 09:55:30.862596185 +0000 UTC m=+99.483020268" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.886146 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bgt4x" podStartSLOduration=79.886121851 podStartE2EDuration="1m19.886121851s" podCreationTimestamp="2026-01-27 09:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:55:30.886122521 +0000 UTC m=+99.506546624" watchObservedRunningTime="2026-01-27 09:55:30.886121851 +0000 UTC m=+99.506545944" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.900011 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/53ecf428-abfb-45b0-8e9e-eddac8693ec2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wtc89\" (UID: \"53ecf428-abfb-45b0-8e9e-eddac8693ec2\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wtc89" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.900073 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/53ecf428-abfb-45b0-8e9e-eddac8693ec2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wtc89\" (UID: \"53ecf428-abfb-45b0-8e9e-eddac8693ec2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wtc89" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.900118 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53ecf428-abfb-45b0-8e9e-eddac8693ec2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wtc89\" (UID: \"53ecf428-abfb-45b0-8e9e-eddac8693ec2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wtc89" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.900148 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53ecf428-abfb-45b0-8e9e-eddac8693ec2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wtc89\" (UID: \"53ecf428-abfb-45b0-8e9e-eddac8693ec2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wtc89" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.900284 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/53ecf428-abfb-45b0-8e9e-eddac8693ec2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wtc89\" (UID: \"53ecf428-abfb-45b0-8e9e-eddac8693ec2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wtc89" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.903100 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-9pfwk" podStartSLOduration=79.903085913 podStartE2EDuration="1m19.903085913s" podCreationTimestamp="2026-01-27 09:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:55:30.902785014 +0000 UTC m=+99.523209137" watchObservedRunningTime="2026-01-27 09:55:30.903085913 +0000 UTC m=+99.523510006" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.916300 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-bv4rq" podStartSLOduration=79.916278973 podStartE2EDuration="1m19.916278973s" podCreationTimestamp="2026-01-27 09:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:55:30.915788288 +0000 UTC m=+99.536212391" watchObservedRunningTime="2026-01-27 09:55:30.916278973 +0000 UTC m=+99.536703076" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.928330 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=71.928308219 podStartE2EDuration="1m11.928308219s" podCreationTimestamp="2026-01-27 09:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:55:30.927573187 +0000 UTC m=+99.547997300" 
watchObservedRunningTime="2026-01-27 09:55:30.928308219 +0000 UTC m=+99.548732312" Jan 27 09:55:30 crc kubenswrapper[4869]: I0127 09:55:30.995307 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podStartSLOduration=79.995286611 podStartE2EDuration="1m19.995286611s" podCreationTimestamp="2026-01-27 09:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:55:30.982532883 +0000 UTC m=+99.602956966" watchObservedRunningTime="2026-01-27 09:55:30.995286611 +0000 UTC m=+99.615710704" Jan 27 09:55:31 crc kubenswrapper[4869]: I0127 09:55:31.001381 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/53ecf428-abfb-45b0-8e9e-eddac8693ec2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wtc89\" (UID: \"53ecf428-abfb-45b0-8e9e-eddac8693ec2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wtc89" Jan 27 09:55:31 crc kubenswrapper[4869]: I0127 09:55:31.001421 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53ecf428-abfb-45b0-8e9e-eddac8693ec2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wtc89\" (UID: \"53ecf428-abfb-45b0-8e9e-eddac8693ec2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wtc89" Jan 27 09:55:31 crc kubenswrapper[4869]: I0127 09:55:31.001448 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53ecf428-abfb-45b0-8e9e-eddac8693ec2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wtc89\" (UID: \"53ecf428-abfb-45b0-8e9e-eddac8693ec2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wtc89" Jan 27 09:55:31 crc kubenswrapper[4869]: I0127 09:55:31.001492 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/53ecf428-abfb-45b0-8e9e-eddac8693ec2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wtc89\" (UID: \"53ecf428-abfb-45b0-8e9e-eddac8693ec2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wtc89" Jan 27 09:55:31 crc kubenswrapper[4869]: I0127 09:55:31.001555 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/53ecf428-abfb-45b0-8e9e-eddac8693ec2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wtc89\" (UID: \"53ecf428-abfb-45b0-8e9e-eddac8693ec2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wtc89" Jan 27 09:55:31 crc kubenswrapper[4869]: I0127 09:55:31.001603 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/53ecf428-abfb-45b0-8e9e-eddac8693ec2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wtc89\" (UID: \"53ecf428-abfb-45b0-8e9e-eddac8693ec2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wtc89" Jan 27 09:55:31 crc kubenswrapper[4869]: I0127 09:55:31.002294 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/53ecf428-abfb-45b0-8e9e-eddac8693ec2-service-ca\") pod 
\"cluster-version-operator-5c965bbfc6-wtc89\" (UID: \"53ecf428-abfb-45b0-8e9e-eddac8693ec2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wtc89" Jan 27 09:55:31 crc kubenswrapper[4869]: I0127 09:55:31.002894 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/53ecf428-abfb-45b0-8e9e-eddac8693ec2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wtc89\" (UID: \"53ecf428-abfb-45b0-8e9e-eddac8693ec2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wtc89" Jan 27 09:55:31 crc kubenswrapper[4869]: I0127 09:55:31.014705 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53ecf428-abfb-45b0-8e9e-eddac8693ec2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wtc89\" (UID: \"53ecf428-abfb-45b0-8e9e-eddac8693ec2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wtc89" Jan 27 09:55:31 crc kubenswrapper[4869]: I0127 09:55:31.014709 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xqf8x" podStartSLOduration=79.014691335 podStartE2EDuration="1m19.014691335s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:55:30.995714254 +0000 UTC m=+99.616138357" watchObservedRunningTime="2026-01-27 09:55:31.014691335 +0000 UTC m=+99.635115418" Jan 27 09:55:31 crc kubenswrapper[4869]: I0127 09:55:31.022491 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53ecf428-abfb-45b0-8e9e-eddac8693ec2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wtc89\" (UID: \"53ecf428-abfb-45b0-8e9e-eddac8693ec2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wtc89" Jan 27 09:55:31 crc kubenswrapper[4869]: I0127 09:55:31.033340 4869 scope.go:117] "RemoveContainer" containerID="2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690" Jan 27 09:55:31 crc kubenswrapper[4869]: E0127 09:55:31.033605 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-45hzs_openshift-ovn-kubernetes(8d38c693-da40-464a-9822-f98fb1b5ca35)\"" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" Jan 27 09:55:31 crc kubenswrapper[4869]: I0127 09:55:31.034847 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=21.034820161 podStartE2EDuration="21.034820161s" podCreationTimestamp="2026-01-27 09:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:55:31.02567677 +0000 UTC m=+99.646100853" watchObservedRunningTime="2026-01-27 09:55:31.034820161 +0000 UTC m=+99.655244244" Jan 27 09:55:31 crc kubenswrapper[4869]: I0127 09:55:31.037011 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 00:08:12.858742158 +0000 UTC Jan 27 09:55:31 crc kubenswrapper[4869]: I0127 09:55:31.037062 
4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 09:55:31 crc kubenswrapper[4869]: I0127 09:55:31.043540 4869 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 09:55:31 crc kubenswrapper[4869]: I0127 09:55:31.050133 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.050115543 podStartE2EDuration="1m18.050115543s" podCreationTimestamp="2026-01-27 09:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:55:31.048810705 +0000 UTC m=+99.669234788" watchObservedRunningTime="2026-01-27 09:55:31.050115543 +0000 UTC m=+99.670539626" Jan 27 09:55:31 crc kubenswrapper[4869]: I0127 09:55:31.051463 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wtc89" Jan 27 09:55:31 crc kubenswrapper[4869]: I0127 09:55:31.611537 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wtc89" event={"ID":"53ecf428-abfb-45b0-8e9e-eddac8693ec2","Type":"ContainerStarted","Data":"2c5691ac4ed63684b8145289304b9e41fa62eb4d7d6b6e8f66837cc8f05f690d"} Jan 27 09:55:31 crc kubenswrapper[4869]: I0127 09:55:31.611869 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wtc89" event={"ID":"53ecf428-abfb-45b0-8e9e-eddac8693ec2","Type":"ContainerStarted","Data":"79bd178efa62060ae370b81bb3ab60b36213ae624e67d0b45788e3c77d194200"} Jan 27 09:55:31 crc kubenswrapper[4869]: I0127 09:55:31.628360 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wtc89" podStartSLOduration=80.628339083 podStartE2EDuration="1m20.628339083s" podCreationTimestamp="2026-01-27 09:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:55:31.627359094 +0000 UTC m=+100.247783207" watchObservedRunningTime="2026-01-27 09:55:31.628339083 +0000 UTC m=+100.248763166" Jan 27 09:55:32 crc kubenswrapper[4869]: I0127 09:55:32.032102 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:32 crc kubenswrapper[4869]: I0127 09:55:32.032187 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:32 crc kubenswrapper[4869]: E0127 09:55:32.032230 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:32 crc kubenswrapper[4869]: I0127 09:55:32.032299 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:32 crc kubenswrapper[4869]: I0127 09:55:32.032349 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:32 crc kubenswrapper[4869]: E0127 09:55:32.032445 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:32 crc kubenswrapper[4869]: E0127 09:55:32.035219 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:32 crc kubenswrapper[4869]: E0127 09:55:32.035297 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:34 crc kubenswrapper[4869]: I0127 09:55:34.033010 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:34 crc kubenswrapper[4869]: I0127 09:55:34.033034 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:34 crc kubenswrapper[4869]: I0127 09:55:34.033153 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:34 crc kubenswrapper[4869]: I0127 09:55:34.033221 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:34 crc kubenswrapper[4869]: E0127 09:55:34.033497 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:34 crc kubenswrapper[4869]: E0127 09:55:34.033793 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:34 crc kubenswrapper[4869]: E0127 09:55:34.033933 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:34 crc kubenswrapper[4869]: E0127 09:55:34.034039 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:36 crc kubenswrapper[4869]: I0127 09:55:36.032573 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:36 crc kubenswrapper[4869]: I0127 09:55:36.032625 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:36 crc kubenswrapper[4869]: I0127 09:55:36.032656 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:36 crc kubenswrapper[4869]: I0127 09:55:36.032667 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:36 crc kubenswrapper[4869]: E0127 09:55:36.034158 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:36 crc kubenswrapper[4869]: E0127 09:55:36.034226 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:36 crc kubenswrapper[4869]: E0127 09:55:36.034281 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:36 crc kubenswrapper[4869]: E0127 09:55:36.034340 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:38 crc kubenswrapper[4869]: I0127 09:55:38.032711 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:38 crc kubenswrapper[4869]: I0127 09:55:38.032751 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:38 crc kubenswrapper[4869]: I0127 09:55:38.032867 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:38 crc kubenswrapper[4869]: I0127 09:55:38.032881 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:38 crc kubenswrapper[4869]: E0127 09:55:38.032975 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:38 crc kubenswrapper[4869]: E0127 09:55:38.033049 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:38 crc kubenswrapper[4869]: E0127 09:55:38.033175 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:38 crc kubenswrapper[4869]: E0127 09:55:38.033283 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:40 crc kubenswrapper[4869]: I0127 09:55:40.032491 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:40 crc kubenswrapper[4869]: I0127 09:55:40.032558 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:40 crc kubenswrapper[4869]: I0127 09:55:40.032491 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:40 crc kubenswrapper[4869]: E0127 09:55:40.032784 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:40 crc kubenswrapper[4869]: E0127 09:55:40.032859 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:40 crc kubenswrapper[4869]: I0127 09:55:40.032558 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:40 crc kubenswrapper[4869]: E0127 09:55:40.033059 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:40 crc kubenswrapper[4869]: E0127 09:55:40.033178 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:42 crc kubenswrapper[4869]: I0127 09:55:42.032954 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:42 crc kubenswrapper[4869]: I0127 09:55:42.032987 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:42 crc kubenswrapper[4869]: I0127 09:55:42.033074 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:42 crc kubenswrapper[4869]: I0127 09:55:42.033069 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:42 crc kubenswrapper[4869]: E0127 09:55:42.034332 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:42 crc kubenswrapper[4869]: E0127 09:55:42.034493 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:42 crc kubenswrapper[4869]: E0127 09:55:42.034647 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:42 crc kubenswrapper[4869]: E0127 09:55:42.034712 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:44 crc kubenswrapper[4869]: I0127 09:55:44.033214 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:44 crc kubenswrapper[4869]: I0127 09:55:44.033214 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:44 crc kubenswrapper[4869]: I0127 09:55:44.033276 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:44 crc kubenswrapper[4869]: I0127 09:55:44.034592 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:44 crc kubenswrapper[4869]: E0127 09:55:44.034794 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:44 crc kubenswrapper[4869]: E0127 09:55:44.035210 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:44 crc kubenswrapper[4869]: E0127 09:55:44.035667 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:44 crc kubenswrapper[4869]: E0127 09:55:44.035780 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:45 crc kubenswrapper[4869]: I0127 09:55:45.034781 4869 scope.go:117] "RemoveContainer" containerID="2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690" Jan 27 09:55:45 crc kubenswrapper[4869]: E0127 09:55:45.035168 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-45hzs_openshift-ovn-kubernetes(8d38c693-da40-464a-9822-f98fb1b5ca35)\"" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" Jan 27 09:55:45 crc kubenswrapper[4869]: I0127 09:55:45.656210 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xj5gd_c4e8dfa0-1849-457a-b564-4f77e534a7e0/kube-multus/1.log" Jan 27 09:55:45 crc kubenswrapper[4869]: I0127 09:55:45.657183 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xj5gd_c4e8dfa0-1849-457a-b564-4f77e534a7e0/kube-multus/0.log" Jan 27 09:55:45 crc kubenswrapper[4869]: I0127 09:55:45.657365 4869 generic.go:334] "Generic (PLEG): container finished" podID="c4e8dfa0-1849-457a-b564-4f77e534a7e0" containerID="66392d6e395aa6ef33d94595eb5b6670f9205bc5591c35db295b8e29d84c7c63" exitCode=1 Jan 27 09:55:45 crc kubenswrapper[4869]: I0127 09:55:45.657413 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xj5gd" event={"ID":"c4e8dfa0-1849-457a-b564-4f77e534a7e0","Type":"ContainerDied","Data":"66392d6e395aa6ef33d94595eb5b6670f9205bc5591c35db295b8e29d84c7c63"} Jan 27 09:55:45 crc kubenswrapper[4869]: I0127 09:55:45.657657 4869 scope.go:117] "RemoveContainer" containerID="510f7586287ae24f1f376e5ffb136dc6878cd91295815cc5715be13d4ee02a4a" Jan 27 09:55:45 crc kubenswrapper[4869]: I0127 09:55:45.658234 4869 scope.go:117] "RemoveContainer" containerID="66392d6e395aa6ef33d94595eb5b6670f9205bc5591c35db295b8e29d84c7c63" Jan 27 09:55:45 crc kubenswrapper[4869]: E0127 
09:55:45.658552 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-xj5gd_openshift-multus(c4e8dfa0-1849-457a-b564-4f77e534a7e0)\"" pod="openshift-multus/multus-xj5gd" podUID="c4e8dfa0-1849-457a-b564-4f77e534a7e0" Jan 27 09:55:46 crc kubenswrapper[4869]: I0127 09:55:46.032660 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:46 crc kubenswrapper[4869]: I0127 09:55:46.032713 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:46 crc kubenswrapper[4869]: E0127 09:55:46.032773 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:46 crc kubenswrapper[4869]: E0127 09:55:46.032850 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:46 crc kubenswrapper[4869]: I0127 09:55:46.033360 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:46 crc kubenswrapper[4869]: I0127 09:55:46.033408 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:46 crc kubenswrapper[4869]: E0127 09:55:46.033671 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:46 crc kubenswrapper[4869]: E0127 09:55:46.033801 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:46 crc kubenswrapper[4869]: I0127 09:55:46.663276 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xj5gd_c4e8dfa0-1849-457a-b564-4f77e534a7e0/kube-multus/1.log" Jan 27 09:55:48 crc kubenswrapper[4869]: I0127 09:55:48.032615 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:48 crc kubenswrapper[4869]: I0127 09:55:48.032665 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:48 crc kubenswrapper[4869]: E0127 09:55:48.032747 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:48 crc kubenswrapper[4869]: E0127 09:55:48.032909 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:48 crc kubenswrapper[4869]: I0127 09:55:48.032952 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:48 crc kubenswrapper[4869]: I0127 09:55:48.032962 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:48 crc kubenswrapper[4869]: E0127 09:55:48.033193 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:48 crc kubenswrapper[4869]: E0127 09:55:48.033344 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:50 crc kubenswrapper[4869]: I0127 09:55:50.032906 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:50 crc kubenswrapper[4869]: I0127 09:55:50.032991 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:50 crc kubenswrapper[4869]: I0127 09:55:50.032992 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:50 crc kubenswrapper[4869]: E0127 09:55:50.033077 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:50 crc kubenswrapper[4869]: I0127 09:55:50.033316 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:50 crc kubenswrapper[4869]: E0127 09:55:50.033311 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:50 crc kubenswrapper[4869]: E0127 09:55:50.033363 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:50 crc kubenswrapper[4869]: E0127 09:55:50.033407 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:52 crc kubenswrapper[4869]: I0127 09:55:52.032794 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:52 crc kubenswrapper[4869]: I0127 09:55:52.032802 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:52 crc kubenswrapper[4869]: E0127 09:55:52.035644 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:52 crc kubenswrapper[4869]: I0127 09:55:52.035676 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:52 crc kubenswrapper[4869]: E0127 09:55:52.036242 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:52 crc kubenswrapper[4869]: E0127 09:55:52.035866 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:52 crc kubenswrapper[4869]: I0127 09:55:52.035718 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:52 crc kubenswrapper[4869]: E0127 09:55:52.037036 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:52 crc kubenswrapper[4869]: E0127 09:55:52.075316 4869 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 09:55:52 crc kubenswrapper[4869]: E0127 09:55:52.133659 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 09:55:54 crc kubenswrapper[4869]: I0127 09:55:54.032613 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:54 crc kubenswrapper[4869]: I0127 09:55:54.032757 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:54 crc kubenswrapper[4869]: I0127 09:55:54.033877 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:54 crc kubenswrapper[4869]: E0127 09:55:54.033937 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:54 crc kubenswrapper[4869]: E0127 09:55:54.034380 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:54 crc kubenswrapper[4869]: E0127 09:55:54.034472 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:54 crc kubenswrapper[4869]: I0127 09:55:54.034785 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:54 crc kubenswrapper[4869]: E0127 09:55:54.035067 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:56 crc kubenswrapper[4869]: I0127 09:55:56.032205 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:56 crc kubenswrapper[4869]: E0127 09:55:56.032374 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:56 crc kubenswrapper[4869]: I0127 09:55:56.032620 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:56 crc kubenswrapper[4869]: E0127 09:55:56.032706 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:56 crc kubenswrapper[4869]: I0127 09:55:56.033160 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:56 crc kubenswrapper[4869]: I0127 09:55:56.033203 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:56 crc kubenswrapper[4869]: I0127 09:55:56.033412 4869 scope.go:117] "RemoveContainer" containerID="2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690" Jan 27 09:55:56 crc kubenswrapper[4869]: E0127 09:55:56.033628 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:55:56 crc kubenswrapper[4869]: E0127 09:55:56.033870 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:56 crc kubenswrapper[4869]: I0127 09:55:56.696243 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-45hzs_8d38c693-da40-464a-9822-f98fb1b5ca35/ovnkube-controller/3.log" Jan 27 09:55:56 crc kubenswrapper[4869]: I0127 09:55:56.703272 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerStarted","Data":"4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a"} Jan 27 09:55:56 crc kubenswrapper[4869]: I0127 09:55:56.703672 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:55:56 crc kubenswrapper[4869]: I0127 09:55:56.807362 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" podStartSLOduration=105.806822132 podStartE2EDuration="1m45.806822132s" podCreationTimestamp="2026-01-27 09:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:55:56.737905933 +0000 UTC m=+125.358330026" watchObservedRunningTime="2026-01-27 09:55:56.806822132 +0000 UTC m=+125.427246215" Jan 27 09:55:56 crc kubenswrapper[4869]: I0127 09:55:56.807810 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-p5frm"] Jan 27 09:55:56 crc kubenswrapper[4869]: I0127 09:55:56.807922 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:56 crc kubenswrapper[4869]: E0127 09:55:56.807996 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:57 crc kubenswrapper[4869]: E0127 09:55:57.135120 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 09:55:58 crc kubenswrapper[4869]: I0127 09:55:58.032514 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:55:58 crc kubenswrapper[4869]: I0127 09:55:58.032622 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:55:58 crc kubenswrapper[4869]: I0127 09:55:58.032635 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:55:58 crc kubenswrapper[4869]: E0127 09:55:58.032748 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:55:58 crc kubenswrapper[4869]: I0127 09:55:58.032790 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:55:58 crc kubenswrapper[4869]: E0127 09:55:58.033037 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:55:58 crc kubenswrapper[4869]: E0127 09:55:58.033242 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:55:58 crc kubenswrapper[4869]: E0127 09:55:58.033339 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:56:00 crc kubenswrapper[4869]: I0127 09:56:00.032586 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:56:00 crc kubenswrapper[4869]: I0127 09:56:00.032662 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:56:00 crc kubenswrapper[4869]: I0127 09:56:00.032664 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:56:00 crc kubenswrapper[4869]: E0127 09:56:00.032809 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:56:00 crc kubenswrapper[4869]: E0127 09:56:00.032990 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:56:00 crc kubenswrapper[4869]: I0127 09:56:00.033099 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:56:00 crc kubenswrapper[4869]: E0127 09:56:00.033138 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:56:00 crc kubenswrapper[4869]: E0127 09:56:00.033290 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:56:00 crc kubenswrapper[4869]: I0127 09:56:00.033460 4869 scope.go:117] "RemoveContainer" containerID="66392d6e395aa6ef33d94595eb5b6670f9205bc5591c35db295b8e29d84c7c63" Jan 27 09:56:00 crc kubenswrapper[4869]: I0127 09:56:00.717103 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xj5gd_c4e8dfa0-1849-457a-b564-4f77e534a7e0/kube-multus/1.log" Jan 27 09:56:00 crc kubenswrapper[4869]: I0127 09:56:00.717450 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xj5gd" event={"ID":"c4e8dfa0-1849-457a-b564-4f77e534a7e0","Type":"ContainerStarted","Data":"df9de8342d1f640ffd0f53a86a79843cfa53f0e870d9b0f7f8c5fa4f8f2b5342"} Jan 27 09:56:02 crc kubenswrapper[4869]: I0127 09:56:02.032397 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:56:02 crc kubenswrapper[4869]: I0127 09:56:02.032418 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:56:02 crc kubenswrapper[4869]: I0127 09:56:02.033553 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:56:02 crc kubenswrapper[4869]: E0127 09:56:02.033545 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 09:56:02 crc kubenswrapper[4869]: I0127 09:56:02.033621 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:56:02 crc kubenswrapper[4869]: E0127 09:56:02.033695 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p5frm" podUID="0bf72cba-f163-4dc2-b157-cfeb56d0177b" Jan 27 09:56:02 crc kubenswrapper[4869]: E0127 09:56:02.033761 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 09:56:02 crc kubenswrapper[4869]: E0127 09:56:02.033814 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 09:56:04 crc kubenswrapper[4869]: I0127 09:56:04.032609 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:56:04 crc kubenswrapper[4869]: I0127 09:56:04.032806 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:56:04 crc kubenswrapper[4869]: I0127 09:56:04.032878 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:56:04 crc kubenswrapper[4869]: I0127 09:56:04.032806 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:56:04 crc kubenswrapper[4869]: I0127 09:56:04.035853 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 09:56:04 crc kubenswrapper[4869]: I0127 09:56:04.036061 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 09:56:04 crc kubenswrapper[4869]: I0127 09:56:04.036483 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 09:56:04 crc kubenswrapper[4869]: I0127 09:56:04.038255 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 09:56:04 crc kubenswrapper[4869]: I0127 09:56:04.038320 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 09:56:04 crc kubenswrapper[4869]: I0127 09:56:04.038468 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 09:56:05 crc kubenswrapper[4869]: I0127 09:56:05.375870 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.319114 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.363131 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4sqz8"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.363770 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-v5prp"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.364281 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v5prp" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.364684 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.364802 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.365758 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.366704 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-clff8"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.367343 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-clff8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.367560 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.367682 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.367807 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-w8hng"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.368252 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.368778 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.369315 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.374485 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.374784 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.374861 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.375135 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.375292 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.375480 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.375543 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.375581 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.375883 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.375927 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.375961 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.376047 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.376126 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 09:56:11 
crc kubenswrapper[4869]: I0127 09:56:11.376165 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.376186 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.376251 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.376729 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.376751 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.376802 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.376800 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.376926 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qf4jk"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.377076 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.377165 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.377371 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qf4jk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.379196 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.379391 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.379418 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.379456 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.379468 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.379516 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.379527 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.379525 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.379584 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.379597 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.379638 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.379669 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.379712 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.379771 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.379843 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.379890 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.380126 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.381140 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.382151 
4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.398870 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.398884 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.400702 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.400918 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.401024 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dwt4c"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.401178 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.401499 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thwpk"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.401819 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thwpk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.402020 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dwt4c" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.407741 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.408793 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9rzwn"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.425484 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.432812 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.433330 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6951bfc9-9908-4404-9000-cc243c35a314-config\") pod \"controller-manager-879f6c89f-4sqz8\" (UID: \"6951bfc9-9908-4404-9000-cc243c35a314\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.433362 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7g4f\" (UniqueName: \"kubernetes.io/projected/6951bfc9-9908-4404-9000-cc243c35a314-kube-api-access-f7g4f\") pod \"controller-manager-879f6c89f-4sqz8\" (UID: \"6951bfc9-9908-4404-9000-cc243c35a314\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.433380 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9jnc\" (UniqueName: \"kubernetes.io/projected/6962f915-2dbf-4aa0-8e97-79ccb1dc35de-kube-api-access-l9jnc\") pod \"machine-approver-56656f9798-v5prp\" (UID: \"6962f915-2dbf-4aa0-8e97-79ccb1dc35de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v5prp" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.433403 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6951bfc9-9908-4404-9000-cc243c35a314-serving-cert\") pod \"controller-manager-879f6c89f-4sqz8\" (UID: \"6951bfc9-9908-4404-9000-cc243c35a314\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.433417 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6962f915-2dbf-4aa0-8e97-79ccb1dc35de-machine-approver-tls\") pod \"machine-approver-56656f9798-v5prp\" (UID: \"6962f915-2dbf-4aa0-8e97-79ccb1dc35de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v5prp" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.433440 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6962f915-2dbf-4aa0-8e97-79ccb1dc35de-auth-proxy-config\") pod \"machine-approver-56656f9798-v5prp\" (UID: \"6962f915-2dbf-4aa0-8e97-79ccb1dc35de\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v5prp" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.433457 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6951bfc9-9908-4404-9000-cc243c35a314-client-ca\") pod \"controller-manager-879f6c89f-4sqz8\" (UID: \"6951bfc9-9908-4404-9000-cc243c35a314\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.433470 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6962f915-2dbf-4aa0-8e97-79ccb1dc35de-config\") pod \"machine-approver-56656f9798-v5prp\" (UID: \"6962f915-2dbf-4aa0-8e97-79ccb1dc35de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v5prp" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.433492 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6951bfc9-9908-4404-9000-cc243c35a314-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4sqz8\" (UID: \"6951bfc9-9908-4404-9000-cc243c35a314\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.433493 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qj9jg"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.433918 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qj9jg" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.434054 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rzwn" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.434274 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.434697 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.434849 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bnbjg"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.435644 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bnbjg" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.436105 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.436380 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.436743 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.437126 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.437271 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.444974 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.445191 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.445259 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.445543 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.445758 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.445892 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.446016 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.446127 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.446288 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.446423 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.447945 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rnv4g"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.448573 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.449530 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-dpsrp"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.450044 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-dpsrp" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.450242 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gk8wd"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.450774 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gk8wd" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.456752 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.457009 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.457149 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.457272 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-q86c4"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.457515 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.457802 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.458007 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qvjjk"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.458264 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.458440 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.458568 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qvjjk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.458583 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.458727 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.459700 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.459887 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.460053 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.460175 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.460355 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.460564 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.460704 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.461947 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.462093 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-clff8"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.462129 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhjsx"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.462651 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhjsx" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.464689 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.466629 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.469948 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4sqz8"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.469994 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.470007 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jsrbp"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.470515 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.474711 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-smmkq"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.475233 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-p5qm5"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.475663 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-ffwjx"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.476121 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-ffwjx" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.476444 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-smmkq" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.476632 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p5qm5" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.477276 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bn4v9"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.477886 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn4v9" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.478136 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vjcv"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.478889 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vjcv" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.482868 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j9w4z"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.483411 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6f8sx"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.490286 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktdt7"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.497482 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktdt7" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.507214 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j9w4z" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.508671 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.511009 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534439 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b05e9e31-f26d-4358-a644-796cd3fea7a8-config\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534489 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/01c8f5c5-8c83-43b2-9070-6b138b246718-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-dwt4c\" (UID: \"01c8f5c5-8c83-43b2-9070-6b138b246718\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dwt4c" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534516 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534538 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534559 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4tvf\" (UniqueName: 
\"kubernetes.io/projected/0347f639-0210-4f2c-99de-915830c86a6d-kube-api-access-t4tvf\") pod \"console-operator-58897d9998-gk8wd\" (UID: \"0347f639-0210-4f2c-99de-915830c86a6d\") " pod="openshift-console-operator/console-operator-58897d9998-gk8wd" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534579 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b10a171e-2958-45c1-9a6d-c8c14a7a24ae-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qvjjk\" (UID: \"b10a171e-2958-45c1-9a6d-c8c14a7a24ae\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qvjjk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534600 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e262a14-a507-44b4-8634-5f4854181f02-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bnbjg\" (UID: \"8e262a14-a507-44b4-8634-5f4854181f02\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bnbjg" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534618 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0347f639-0210-4f2c-99de-915830c86a6d-config\") pod \"console-operator-58897d9998-gk8wd\" (UID: \"0347f639-0210-4f2c-99de-915830c86a6d\") " pod="openshift-console-operator/console-operator-58897d9998-gk8wd" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534637 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0347f639-0210-4f2c-99de-915830c86a6d-serving-cert\") pod \"console-operator-58897d9998-gk8wd\" (UID: \"0347f639-0210-4f2c-99de-915830c86a6d\") " pod="openshift-console-operator/console-operator-58897d9998-gk8wd" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534659 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89c725c4-90e8-4965-b48d-89f3d2771faf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534677 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b05e9e31-f26d-4358-a644-796cd3fea7a8-image-import-ca\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534695 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b05e9e31-f26d-4358-a644-796cd3fea7a8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534720 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534741 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/670d8b6b-95a2-4711-98db-3f71e295093b-images\") pod \"machine-api-operator-5694c8668f-clff8\" (UID: \"670d8b6b-95a2-4711-98db-3f71e295093b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clff8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534760 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b05e9e31-f26d-4358-a644-796cd3fea7a8-audit\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534778 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534797 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534816 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89c725c4-90e8-4965-b48d-89f3d2771faf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534864 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17cbc9af-17b4-4815-b527-9d9d9c5112fc-client-ca\") pod \"route-controller-manager-6576b87f9c-vwhlz\" (UID: \"17cbc9af-17b4-4815-b527-9d9d9c5112fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534893 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-oauth-serving-cert\") pod \"console-f9d7485db-q86c4\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534911 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/b10a171e-2958-45c1-9a6d-c8c14a7a24ae-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qvjjk\" (UID: \"b10a171e-2958-45c1-9a6d-c8c14a7a24ae\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qvjjk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534932 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tknxj\" (UniqueName: \"kubernetes.io/projected/b6851779-1393-4518-be8b-519296708bd7-kube-api-access-tknxj\") pod \"console-f9d7485db-q86c4\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534952 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a39985a-ab91-430a-be02-8f2ac1399a37-config\") pod \"kube-apiserver-operator-766d6c64bb-zhjsx\" (UID: \"4a39985a-ab91-430a-be02-8f2ac1399a37\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhjsx" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534971 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b6851779-1393-4518-be8b-519296708bd7-console-oauth-config\") pod \"console-f9d7485db-q86c4\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.534989 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-audit-dir\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.535013 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6951bfc9-9908-4404-9000-cc243c35a314-config\") pod \"controller-manager-879f6c89f-4sqz8\" (UID: \"6951bfc9-9908-4404-9000-cc243c35a314\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.535033 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b05e9e31-f26d-4358-a644-796cd3fea7a8-serving-cert\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.535052 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17cbc9af-17b4-4815-b527-9d9d9c5112fc-serving-cert\") pod \"route-controller-manager-6576b87f9c-vwhlz\" (UID: \"17cbc9af-17b4-4815-b527-9d9d9c5112fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.535071 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/89c725c4-90e8-4965-b48d-89f3d2771faf-audit-policies\") pod \"apiserver-7bbb656c7d-r7z5l\" 
(UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.535090 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17cbc9af-17b4-4815-b527-9d9d9c5112fc-config\") pod \"route-controller-manager-6576b87f9c-vwhlz\" (UID: \"17cbc9af-17b4-4815-b527-9d9d9c5112fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.535225 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9jnc\" (UniqueName: \"kubernetes.io/projected/6962f915-2dbf-4aa0-8e97-79ccb1dc35de-kube-api-access-l9jnc\") pod \"machine-approver-56656f9798-v5prp\" (UID: \"6962f915-2dbf-4aa0-8e97-79ccb1dc35de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v5prp" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.535363 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b05e9e31-f26d-4358-a644-796cd3fea7a8-encryption-config\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.535385 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7a44818f-a388-4dcb-93f4-b781c1f7bf16-metrics-tls\") pod \"dns-operator-744455d44c-qj9jg\" (UID: \"7a44818f-a388-4dcb-93f4-b781c1f7bf16\") " pod="openshift-dns-operator/dns-operator-744455d44c-qj9jg" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.535406 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks7jn\" (UniqueName: \"kubernetes.io/projected/8e262a14-a507-44b4-8634-5f4854181f02-kube-api-access-ks7jn\") pod \"cluster-image-registry-operator-dc59b4c8b-bnbjg\" (UID: \"8e262a14-a507-44b4-8634-5f4854181f02\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bnbjg" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.535425 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89c725c4-90e8-4965-b48d-89f3d2771faf-serving-cert\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.535445 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlvsc\" (UniqueName: \"kubernetes.io/projected/89c725c4-90e8-4965-b48d-89f3d2771faf-kube-api-access-dlvsc\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.535466 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b05e9e31-f26d-4358-a644-796cd3fea7a8-etcd-client\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 
09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.535484 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e262a14-a507-44b4-8634-5f4854181f02-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bnbjg\" (UID: \"8e262a14-a507-44b4-8634-5f4854181f02\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bnbjg" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.535504 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.536930 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6951bfc9-9908-4404-9000-cc243c35a314-config\") pod \"controller-manager-879f6c89f-4sqz8\" (UID: \"6951bfc9-9908-4404-9000-cc243c35a314\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.537293 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6951bfc9-9908-4404-9000-cc243c35a314-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4sqz8\" (UID: \"6951bfc9-9908-4404-9000-cc243c35a314\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.537371 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-console-config\") pod \"console-f9d7485db-q86c4\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.537423 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6f8sx" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538410 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6951bfc9-9908-4404-9000-cc243c35a314-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4sqz8\" (UID: \"6951bfc9-9908-4404-9000-cc243c35a314\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538463 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538489 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcbq5\" (UniqueName: \"kubernetes.io/projected/493c38dc-c859-4715-b97f-be1388ee2162-kube-api-access-tcbq5\") pod \"downloads-7954f5f757-dpsrp\" (UID: \"493c38dc-c859-4715-b97f-be1388ee2162\") " pod="openshift-console/downloads-7954f5f757-dpsrp" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538521 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b05e9e31-f26d-4358-a644-796cd3fea7a8-node-pullsecrets\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538543 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89c725c4-90e8-4965-b48d-89f3d2771faf-etcd-client\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538559 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a39985a-ab91-430a-be02-8f2ac1399a37-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zhjsx\" (UID: \"4a39985a-ab91-430a-be02-8f2ac1399a37\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhjsx" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538583 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55adfb11-256b-4dd4-ba09-00ffd68f6e5e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qf4jk\" (UID: \"55adfb11-256b-4dd4-ba09-00ffd68f6e5e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qf4jk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538611 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-service-ca\") pod \"console-f9d7485db-q86c4\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:11 crc 
kubenswrapper[4869]: I0127 09:56:11.538632 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538652 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/670d8b6b-95a2-4711-98db-3f71e295093b-config\") pod \"machine-api-operator-5694c8668f-clff8\" (UID: \"670d8b6b-95a2-4711-98db-3f71e295093b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clff8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538668 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pl9c\" (UniqueName: \"kubernetes.io/projected/670d8b6b-95a2-4711-98db-3f71e295093b-kube-api-access-9pl9c\") pod \"machine-api-operator-5694c8668f-clff8\" (UID: \"670d8b6b-95a2-4711-98db-3f71e295093b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clff8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538693 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8trwt\" (UniqueName: \"kubernetes.io/projected/6a00c17d-c0fe-49a3-921a-2c19dcea3274-kube-api-access-8trwt\") pod \"openshift-config-operator-7777fb866f-9rzwn\" (UID: \"6a00c17d-c0fe-49a3-921a-2c19dcea3274\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rzwn" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538718 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/670d8b6b-95a2-4711-98db-3f71e295093b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-clff8\" (UID: \"670d8b6b-95a2-4711-98db-3f71e295093b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clff8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538740 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b10a171e-2958-45c1-9a6d-c8c14a7a24ae-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qvjjk\" (UID: \"b10a171e-2958-45c1-9a6d-c8c14a7a24ae\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qvjjk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538754 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/89c725c4-90e8-4965-b48d-89f3d2771faf-encryption-config\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538789 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6a00c17d-c0fe-49a3-921a-2c19dcea3274-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9rzwn\" (UID: \"6a00c17d-c0fe-49a3-921a-2c19dcea3274\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rzwn" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538813 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a39985a-ab91-430a-be02-8f2ac1399a37-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zhjsx\" (UID: \"4a39985a-ab91-430a-be02-8f2ac1399a37\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhjsx" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538846 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnlgk\" (UniqueName: \"kubernetes.io/projected/17cbc9af-17b4-4815-b527-9d9d9c5112fc-kube-api-access-hnlgk\") pod \"route-controller-manager-6576b87f9c-vwhlz\" (UID: \"17cbc9af-17b4-4815-b527-9d9d9c5112fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538870 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6851779-1393-4518-be8b-519296708bd7-console-serving-cert\") pod \"console-f9d7485db-q86c4\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538884 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55adfb11-256b-4dd4-ba09-00ffd68f6e5e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qf4jk\" (UID: \"55adfb11-256b-4dd4-ba09-00ffd68f6e5e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qf4jk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538925 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wcr9\" (UniqueName: \"kubernetes.io/projected/7a44818f-a388-4dcb-93f4-b781c1f7bf16-kube-api-access-7wcr9\") pod \"dns-operator-744455d44c-qj9jg\" (UID: \"7a44818f-a388-4dcb-93f4-b781c1f7bf16\") " pod="openshift-dns-operator/dns-operator-744455d44c-qj9jg" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538946 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e262a14-a507-44b4-8634-5f4854181f02-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bnbjg\" (UID: \"8e262a14-a507-44b4-8634-5f4854181f02\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bnbjg" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538963 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.538978 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539015 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4gwz\" (UniqueName: \"kubernetes.io/projected/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-kube-api-access-b4gwz\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539032 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-trusted-ca-bundle\") pod \"console-f9d7485db-q86c4\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539034 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539100 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539259 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539280 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539046 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0347f639-0210-4f2c-99de-915830c86a6d-trusted-ca\") pod \"console-operator-58897d9998-gk8wd\" (UID: \"0347f639-0210-4f2c-99de-915830c86a6d\") " pod="openshift-console-operator/console-operator-58897d9998-gk8wd" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539365 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539370 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqhn6\" (UniqueName: \"kubernetes.io/projected/121d9a3b-d369-4245-84ec-3efeb902ccd8-kube-api-access-nqhn6\") pod \"openshift-controller-manager-operator-756b6f6bc6-thwpk\" (UID: \"121d9a3b-d369-4245-84ec-3efeb902ccd8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thwpk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539391 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89c725c4-90e8-4965-b48d-89f3d2771faf-audit-dir\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539411 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qckk9\" (UniqueName: 
\"kubernetes.io/projected/55adfb11-256b-4dd4-ba09-00ffd68f6e5e-kube-api-access-qckk9\") pod \"openshift-apiserver-operator-796bbdcf4f-qf4jk\" (UID: \"55adfb11-256b-4dd4-ba09-00ffd68f6e5e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qf4jk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539448 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7g4f\" (UniqueName: \"kubernetes.io/projected/6951bfc9-9908-4404-9000-cc243c35a314-kube-api-access-f7g4f\") pod \"controller-manager-879f6c89f-4sqz8\" (UID: \"6951bfc9-9908-4404-9000-cc243c35a314\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539468 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs6jp\" (UniqueName: \"kubernetes.io/projected/b05e9e31-f26d-4358-a644-796cd3fea7a8-kube-api-access-rs6jp\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539548 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/121d9a3b-d369-4245-84ec-3efeb902ccd8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-thwpk\" (UID: \"121d9a3b-d369-4245-84ec-3efeb902ccd8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thwpk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539574 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6951bfc9-9908-4404-9000-cc243c35a314-serving-cert\") pod \"controller-manager-879f6c89f-4sqz8\" (UID: \"6951bfc9-9908-4404-9000-cc243c35a314\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539591 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6962f915-2dbf-4aa0-8e97-79ccb1dc35de-machine-approver-tls\") pod \"machine-approver-56656f9798-v5prp\" (UID: \"6962f915-2dbf-4aa0-8e97-79ccb1dc35de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v5prp" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539607 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b05e9e31-f26d-4358-a644-796cd3fea7a8-audit-dir\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539622 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a00c17d-c0fe-49a3-921a-2c19dcea3274-serving-cert\") pod \"openshift-config-operator-7777fb866f-9rzwn\" (UID: \"6a00c17d-c0fe-49a3-921a-2c19dcea3274\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rzwn" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539654 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/6962f915-2dbf-4aa0-8e97-79ccb1dc35de-auth-proxy-config\") pod \"machine-approver-56656f9798-v5prp\" (UID: \"6962f915-2dbf-4aa0-8e97-79ccb1dc35de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v5prp" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539671 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539548 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539593 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539799 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcmjp\" (UniqueName: \"kubernetes.io/projected/01c8f5c5-8c83-43b2-9070-6b138b246718-kube-api-access-vcmjp\") pod \"cluster-samples-operator-665b6dd947-dwt4c\" (UID: \"01c8f5c5-8c83-43b2-9070-6b138b246718\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dwt4c" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539638 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.539982 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6951bfc9-9908-4404-9000-cc243c35a314-client-ca\") pod \"controller-manager-879f6c89f-4sqz8\" (UID: \"6951bfc9-9908-4404-9000-cc243c35a314\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.540017 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6962f915-2dbf-4aa0-8e97-79ccb1dc35de-config\") pod \"machine-approver-56656f9798-v5prp\" (UID: \"6962f915-2dbf-4aa0-8e97-79ccb1dc35de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v5prp" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.540738 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6951bfc9-9908-4404-9000-cc243c35a314-client-ca\") pod \"controller-manager-879f6c89f-4sqz8\" (UID: \"6951bfc9-9908-4404-9000-cc243c35a314\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.540789 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b05e9e31-f26d-4358-a644-796cd3fea7a8-etcd-serving-ca\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.541272 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/6962f915-2dbf-4aa0-8e97-79ccb1dc35de-config\") pod \"machine-approver-56656f9798-v5prp\" (UID: \"6962f915-2dbf-4aa0-8e97-79ccb1dc35de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v5prp" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.541326 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-audit-policies\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.541351 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/121d9a3b-d369-4245-84ec-3efeb902ccd8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-thwpk\" (UID: \"121d9a3b-d369-4245-84ec-3efeb902ccd8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thwpk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.543552 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6962f915-2dbf-4aa0-8e97-79ccb1dc35de-auth-proxy-config\") pod \"machine-approver-56656f9798-v5prp\" (UID: \"6962f915-2dbf-4aa0-8e97-79ccb1dc35de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v5prp" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.543567 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.544759 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hrflq"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.545711 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vxds9"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.545957 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6951bfc9-9908-4404-9000-cc243c35a314-serving-cert\") pod \"controller-manager-879f6c89f-4sqz8\" (UID: \"6951bfc9-9908-4404-9000-cc243c35a314\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.546220 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qf4jk"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.546235 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vxds9" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.546266 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hrflq" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.546311 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.546628 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.547407 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6962f915-2dbf-4aa0-8e97-79ccb1dc35de-machine-approver-tls\") pod \"machine-approver-56656f9798-v5prp\" (UID: \"6962f915-2dbf-4aa0-8e97-79ccb1dc35de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v5prp" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.547507 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.548009 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.549164 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-w8hng"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.552041 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.552067 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dwt4c"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.552079 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-dpsrp"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.553568 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.553669 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bnbjg"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.554897 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2hgg7"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.555750 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2hgg7" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.555814 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jsrbp"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.556900 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qvjjk"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.557751 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thwpk"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.558929 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-frmph"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.559591 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-frmph" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.559816 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-prtqz"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.560173 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.560957 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jw59"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.561625 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jw59" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.563883 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6kntj"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.564351 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.564999 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-d2ml5"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.565681 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-d2ml5" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.567662 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.568208 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.568373 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xt5cf"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.568701 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xt5cf" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.570015 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xkkb6"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.570473 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.570808 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.571005 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xkkb6" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.571542 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhjsx"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.573355 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9rzwn"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.582503 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gk8wd"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.592165 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-smmkq"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.592188 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rnv4g"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.594408 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.603570 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.603914 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.604208 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.604244 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-q86c4"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.604529 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.606888 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.609711 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.612029 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6f8sx"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.614315 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktdt7"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.617930 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bn4v9"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.619644 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-p5qm5"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.621506 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jw59"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.623899 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xkkb6"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.625131 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qj9jg"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.626916 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.627009 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-frmph"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.628999 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vjcv"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.631909 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hrflq"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.633680 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-q4j8x"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.635240 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-q4j8x" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.635971 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xt5cf"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.637450 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2hgg7"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.638472 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jcj5k"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.640116 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6kntj"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.640220 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.641020 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-prtqz"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.641937 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j9w4z"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.642349 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-oauth-serving-cert\") pod \"console-f9d7485db-q86c4\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.642373 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b10a171e-2958-45c1-9a6d-c8c14a7a24ae-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qvjjk\" (UID: \"b10a171e-2958-45c1-9a6d-c8c14a7a24ae\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qvjjk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.642394 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tknxj\" (UniqueName: \"kubernetes.io/projected/b6851779-1393-4518-be8b-519296708bd7-kube-api-access-tknxj\") pod \"console-f9d7485db-q86c4\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.642871 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a39985a-ab91-430a-be02-8f2ac1399a37-config\") pod \"kube-apiserver-operator-766d6c64bb-zhjsx\" (UID: \"4a39985a-ab91-430a-be02-8f2ac1399a37\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhjsx" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.642910 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wncjw\" (UniqueName: \"kubernetes.io/projected/8b8af0be-d73b-4b8e-b7a2-295834553924-kube-api-access-wncjw\") pod \"migrator-59844c95c7-6f8sx\" (UID: \"8b8af0be-d73b-4b8e-b7a2-295834553924\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6f8sx" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.642932 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58af825f-df23-4365-bf18-1b2a0c2d143f-trusted-ca\") pod \"ingress-operator-5b745b69d9-vxds9\" (UID: \"58af825f-df23-4365-bf18-1b2a0c2d143f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vxds9" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.642954 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-audit-dir\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.643241 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-audit-dir\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.643338 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-oauth-serving-cert\") pod \"console-f9d7485db-q86c4\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.643380 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-q4j8x"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.643935 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a39985a-ab91-430a-be02-8f2ac1399a37-config\") pod \"kube-apiserver-operator-766d6c64bb-zhjsx\" (UID: \"4a39985a-ab91-430a-be02-8f2ac1399a37\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhjsx" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.642973 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b6851779-1393-4518-be8b-519296708bd7-console-oauth-config\") pod \"console-f9d7485db-q86c4\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644015 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b05e9e31-f26d-4358-a644-796cd3fea7a8-serving-cert\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644040 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17cbc9af-17b4-4815-b527-9d9d9c5112fc-serving-cert\") pod \"route-controller-manager-6576b87f9c-vwhlz\" (UID: \"17cbc9af-17b4-4815-b527-9d9d9c5112fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644064 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b05e9e31-f26d-4358-a644-796cd3fea7a8-encryption-config\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644091 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7a44818f-a388-4dcb-93f4-b781c1f7bf16-metrics-tls\") pod \"dns-operator-744455d44c-qj9jg\" (UID: \"7a44818f-a388-4dcb-93f4-b781c1f7bf16\") " pod="openshift-dns-operator/dns-operator-744455d44c-qj9jg" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644116 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/89c725c4-90e8-4965-b48d-89f3d2771faf-audit-policies\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644137 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17cbc9af-17b4-4815-b527-9d9d9c5112fc-config\") pod \"route-controller-manager-6576b87f9c-vwhlz\" (UID: \"17cbc9af-17b4-4815-b527-9d9d9c5112fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644164 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks7jn\" (UniqueName: \"kubernetes.io/projected/8e262a14-a507-44b4-8634-5f4854181f02-kube-api-access-ks7jn\") pod \"cluster-image-registry-operator-dc59b4c8b-bnbjg\" (UID: \"8e262a14-a507-44b4-8634-5f4854181f02\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bnbjg" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644181 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e262a14-a507-44b4-8634-5f4854181f02-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bnbjg\" (UID: \"8e262a14-a507-44b4-8634-5f4854181f02\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bnbjg" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644198 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644213 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89c725c4-90e8-4965-b48d-89f3d2771faf-serving-cert\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644231 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlvsc\" (UniqueName: \"kubernetes.io/projected/89c725c4-90e8-4965-b48d-89f3d2771faf-kube-api-access-dlvsc\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644247 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b05e9e31-f26d-4358-a644-796cd3fea7a8-etcd-client\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644266 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-console-config\") pod \"console-f9d7485db-q86c4\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644282 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/58af825f-df23-4365-bf18-1b2a0c2d143f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vxds9\" (UID: \"58af825f-df23-4365-bf18-1b2a0c2d143f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vxds9" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644305 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78wgf\" (UniqueName: \"kubernetes.io/projected/58af825f-df23-4365-bf18-1b2a0c2d143f-kube-api-access-78wgf\") pod \"ingress-operator-5b745b69d9-vxds9\" (UID: \"58af825f-df23-4365-bf18-1b2a0c2d143f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vxds9" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644329 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcbq5\" (UniqueName: \"kubernetes.io/projected/493c38dc-c859-4715-b97f-be1388ee2162-kube-api-access-tcbq5\") pod \"downloads-7954f5f757-dpsrp\" (UID: \"493c38dc-c859-4715-b97f-be1388ee2162\") " pod="openshift-console/downloads-7954f5f757-dpsrp" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644354 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644366 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vxds9"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644378 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b05e9e31-f26d-4358-a644-796cd3fea7a8-node-pullsecrets\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644399 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89c725c4-90e8-4965-b48d-89f3d2771faf-etcd-client\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644421 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/670d8b6b-95a2-4711-98db-3f71e295093b-config\") pod \"machine-api-operator-5694c8668f-clff8\" (UID: \"670d8b6b-95a2-4711-98db-3f71e295093b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clff8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644442 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pl9c\" (UniqueName: \"kubernetes.io/projected/670d8b6b-95a2-4711-98db-3f71e295093b-kube-api-access-9pl9c\") pod \"machine-api-operator-5694c8668f-clff8\" (UID: \"670d8b6b-95a2-4711-98db-3f71e295093b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clff8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644464 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/4a39985a-ab91-430a-be02-8f2ac1399a37-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zhjsx\" (UID: \"4a39985a-ab91-430a-be02-8f2ac1399a37\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhjsx" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644487 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55adfb11-256b-4dd4-ba09-00ffd68f6e5e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qf4jk\" (UID: \"55adfb11-256b-4dd4-ba09-00ffd68f6e5e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qf4jk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644518 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-service-ca\") pod \"console-f9d7485db-q86c4\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644539 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644563 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8trc\" (UniqueName: \"kubernetes.io/projected/4c861742-2395-4de1-9cc3-1d8328741cbb-kube-api-access-n8trc\") pod \"multus-admission-controller-857f4d67dd-hrflq\" (UID: \"4c861742-2395-4de1-9cc3-1d8328741cbb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hrflq" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644588 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8trwt\" (UniqueName: \"kubernetes.io/projected/6a00c17d-c0fe-49a3-921a-2c19dcea3274-kube-api-access-8trwt\") pod \"openshift-config-operator-7777fb866f-9rzwn\" (UID: \"6a00c17d-c0fe-49a3-921a-2c19dcea3274\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rzwn" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644608 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/89c725c4-90e8-4965-b48d-89f3d2771faf-encryption-config\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644634 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/670d8b6b-95a2-4711-98db-3f71e295093b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-clff8\" (UID: \"670d8b6b-95a2-4711-98db-3f71e295093b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clff8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644658 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b10a171e-2958-45c1-9a6d-c8c14a7a24ae-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-qvjjk\" (UID: \"b10a171e-2958-45c1-9a6d-c8c14a7a24ae\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qvjjk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644683 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnlgk\" (UniqueName: \"kubernetes.io/projected/17cbc9af-17b4-4815-b527-9d9d9c5112fc-kube-api-access-hnlgk\") pod \"route-controller-manager-6576b87f9c-vwhlz\" (UID: \"17cbc9af-17b4-4815-b527-9d9d9c5112fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644706 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6a00c17d-c0fe-49a3-921a-2c19dcea3274-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9rzwn\" (UID: \"6a00c17d-c0fe-49a3-921a-2c19dcea3274\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rzwn" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644728 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a39985a-ab91-430a-be02-8f2ac1399a37-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zhjsx\" (UID: \"4a39985a-ab91-430a-be02-8f2ac1399a37\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhjsx" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.644751 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wcr9\" (UniqueName: \"kubernetes.io/projected/7a44818f-a388-4dcb-93f4-b781c1f7bf16-kube-api-access-7wcr9\") pod \"dns-operator-744455d44c-qj9jg\" (UID: \"7a44818f-a388-4dcb-93f4-b781c1f7bf16\") " pod="openshift-dns-operator/dns-operator-744455d44c-qj9jg" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645243 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e262a14-a507-44b4-8634-5f4854181f02-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-bnbjg\" (UID: \"8e262a14-a507-44b4-8634-5f4854181f02\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bnbjg" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645314 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6851779-1393-4518-be8b-519296708bd7-console-serving-cert\") pod \"console-f9d7485db-q86c4\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645348 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55adfb11-256b-4dd4-ba09-00ffd68f6e5e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qf4jk\" (UID: \"55adfb11-256b-4dd4-ba09-00ffd68f6e5e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qf4jk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645366 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645376 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645438 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4gwz\" (UniqueName: \"kubernetes.io/projected/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-kube-api-access-b4gwz\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645461 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e262a14-a507-44b4-8634-5f4854181f02-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bnbjg\" (UID: \"8e262a14-a507-44b4-8634-5f4854181f02\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bnbjg" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645480 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645500 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89c725c4-90e8-4965-b48d-89f3d2771faf-audit-dir\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645520 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/58af825f-df23-4365-bf18-1b2a0c2d143f-metrics-tls\") pod \"ingress-operator-5b745b69d9-vxds9\" (UID: \"58af825f-df23-4365-bf18-1b2a0c2d143f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vxds9" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645537 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-trusted-ca-bundle\") pod \"console-f9d7485db-q86c4\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645553 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0347f639-0210-4f2c-99de-915830c86a6d-trusted-ca\") pod \"console-operator-58897d9998-gk8wd\" (UID: \"0347f639-0210-4f2c-99de-915830c86a6d\") " pod="openshift-console-operator/console-operator-58897d9998-gk8wd" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645571 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqhn6\" (UniqueName: \"kubernetes.io/projected/121d9a3b-d369-4245-84ec-3efeb902ccd8-kube-api-access-nqhn6\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-thwpk\" (UID: \"121d9a3b-d369-4245-84ec-3efeb902ccd8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thwpk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645595 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs6jp\" (UniqueName: \"kubernetes.io/projected/b05e9e31-f26d-4358-a644-796cd3fea7a8-kube-api-access-rs6jp\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645611 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qckk9\" (UniqueName: \"kubernetes.io/projected/55adfb11-256b-4dd4-ba09-00ffd68f6e5e-kube-api-access-qckk9\") pod \"openshift-apiserver-operator-796bbdcf4f-qf4jk\" (UID: \"55adfb11-256b-4dd4-ba09-00ffd68f6e5e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qf4jk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645630 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcjz4\" (UniqueName: \"kubernetes.io/projected/4ab28893-3f63-4c8a-a023-e0447c39a817-kube-api-access-dcjz4\") pod \"olm-operator-6b444d44fb-9vjcv\" (UID: \"4ab28893-3f63-4c8a-a023-e0447c39a817\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vjcv" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645650 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1718eaa3-1d2b-46a0-b43d-e6408e75d53a-certs\") pod \"machine-config-server-d2ml5\" (UID: \"1718eaa3-1d2b-46a0-b43d-e6408e75d53a\") " pod="openshift-machine-config-operator/machine-config-server-d2ml5" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645667 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b05e9e31-f26d-4358-a644-796cd3fea7a8-audit-dir\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645684 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a00c17d-c0fe-49a3-921a-2c19dcea3274-serving-cert\") pod \"openshift-config-operator-7777fb866f-9rzwn\" (UID: \"6a00c17d-c0fe-49a3-921a-2c19dcea3274\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rzwn" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645688 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-service-ca\") pod \"console-f9d7485db-q86c4\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645705 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/121d9a3b-d369-4245-84ec-3efeb902ccd8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-thwpk\" (UID: \"121d9a3b-d369-4245-84ec-3efeb902ccd8\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thwpk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645765 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645783 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1718eaa3-1d2b-46a0-b43d-e6408e75d53a-node-bootstrap-token\") pod \"machine-config-server-d2ml5\" (UID: \"1718eaa3-1d2b-46a0-b43d-e6408e75d53a\") " pod="openshift-machine-config-operator/machine-config-server-d2ml5" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645802 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcmjp\" (UniqueName: \"kubernetes.io/projected/01c8f5c5-8c83-43b2-9070-6b138b246718-kube-api-access-vcmjp\") pod \"cluster-samples-operator-665b6dd947-dwt4c\" (UID: \"01c8f5c5-8c83-43b2-9070-6b138b246718\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dwt4c" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645817 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4ab28893-3f63-4c8a-a023-e0447c39a817-srv-cert\") pod \"olm-operator-6b444d44fb-9vjcv\" (UID: \"4ab28893-3f63-4c8a-a023-e0447c39a817\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vjcv" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645994 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b83a5e9-9fbe-4404-8dd2-abb2ec6f6e1c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9jw59\" (UID: \"8b83a5e9-9fbe-4404-8dd2-abb2ec6f6e1c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jw59" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646012 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmvsh\" (UniqueName: \"kubernetes.io/projected/8b83a5e9-9fbe-4404-8dd2-abb2ec6f6e1c-kube-api-access-wmvsh\") pod \"package-server-manager-789f6589d5-9jw59\" (UID: \"8b83a5e9-9fbe-4404-8dd2-abb2ec6f6e1c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jw59" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646032 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b05e9e31-f26d-4358-a644-796cd3fea7a8-etcd-serving-ca\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646049 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-audit-policies\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: 
\"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646064 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/121d9a3b-d369-4245-84ec-3efeb902ccd8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-thwpk\" (UID: \"121d9a3b-d369-4245-84ec-3efeb902ccd8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thwpk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646083 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c861742-2395-4de1-9cc3-1d8328741cbb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hrflq\" (UID: \"4c861742-2395-4de1-9cc3-1d8328741cbb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hrflq" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646110 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646126 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646152 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b05e9e31-f26d-4358-a644-796cd3fea7a8-config\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646170 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/01c8f5c5-8c83-43b2-9070-6b138b246718-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-dwt4c\" (UID: \"01c8f5c5-8c83-43b2-9070-6b138b246718\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dwt4c" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646333 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0347f639-0210-4f2c-99de-915830c86a6d-config\") pod \"console-operator-58897d9998-gk8wd\" (UID: \"0347f639-0210-4f2c-99de-915830c86a6d\") " pod="openshift-console-operator/console-operator-58897d9998-gk8wd" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646352 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0347f639-0210-4f2c-99de-915830c86a6d-serving-cert\") pod \"console-operator-58897d9998-gk8wd\" (UID: \"0347f639-0210-4f2c-99de-915830c86a6d\") " pod="openshift-console-operator/console-operator-58897d9998-gk8wd" Jan 27 09:56:11 crc 
kubenswrapper[4869]: I0127 09:56:11.646368 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4tvf\" (UniqueName: \"kubernetes.io/projected/0347f639-0210-4f2c-99de-915830c86a6d-kube-api-access-t4tvf\") pod \"console-operator-58897d9998-gk8wd\" (UID: \"0347f639-0210-4f2c-99de-915830c86a6d\") " pod="openshift-console-operator/console-operator-58897d9998-gk8wd" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646386 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b10a171e-2958-45c1-9a6d-c8c14a7a24ae-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qvjjk\" (UID: \"b10a171e-2958-45c1-9a6d-c8c14a7a24ae\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qvjjk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646406 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e262a14-a507-44b4-8634-5f4854181f02-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bnbjg\" (UID: \"8e262a14-a507-44b4-8634-5f4854181f02\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bnbjg" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646422 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89c725c4-90e8-4965-b48d-89f3d2771faf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646438 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvsmq\" (UniqueName: \"kubernetes.io/projected/1718eaa3-1d2b-46a0-b43d-e6408e75d53a-kube-api-access-tvsmq\") pod \"machine-config-server-d2ml5\" (UID: \"1718eaa3-1d2b-46a0-b43d-e6408e75d53a\") " pod="openshift-machine-config-operator/machine-config-server-d2ml5" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646460 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b05e9e31-f26d-4358-a644-796cd3fea7a8-image-import-ca\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646484 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b05e9e31-f26d-4358-a644-796cd3fea7a8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646513 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646536 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4ab28893-3f63-4c8a-a023-e0447c39a817-profile-collector-cert\") pod \"olm-operator-6b444d44fb-9vjcv\" (UID: \"4ab28893-3f63-4c8a-a023-e0447c39a817\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vjcv" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646556 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646561 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b05e9e31-f26d-4358-a644-796cd3fea7a8-audit\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646589 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646607 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646616 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/670d8b6b-95a2-4711-98db-3f71e295093b-images\") pod \"machine-api-operator-5694c8668f-clff8\" (UID: \"670d8b6b-95a2-4711-98db-3f71e295093b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clff8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646659 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b05e9e31-f26d-4358-a644-796cd3fea7a8-node-pullsecrets\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.646972 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.647017 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89c725c4-90e8-4965-b48d-89f3d2771faf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: 
\"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.647052 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17cbc9af-17b4-4815-b527-9d9d9c5112fc-client-ca\") pod \"route-controller-manager-6576b87f9c-vwhlz\" (UID: \"17cbc9af-17b4-4815-b527-9d9d9c5112fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.647226 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b05e9e31-f26d-4358-a644-796cd3fea7a8-serving-cert\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.647582 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b05e9e31-f26d-4358-a644-796cd3fea7a8-etcd-client\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.648057 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17cbc9af-17b4-4815-b527-9d9d9c5112fc-client-ca\") pod \"route-controller-manager-6576b87f9c-vwhlz\" (UID: \"17cbc9af-17b4-4815-b527-9d9d9c5112fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.648138 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17cbc9af-17b4-4815-b527-9d9d9c5112fc-config\") pod \"route-controller-manager-6576b87f9c-vwhlz\" (UID: \"17cbc9af-17b4-4815-b527-9d9d9c5112fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.645344 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/89c725c4-90e8-4965-b48d-89f3d2771faf-audit-policies\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.648314 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.648313 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7a44818f-a388-4dcb-93f4-b781c1f7bf16-metrics-tls\") pod \"dns-operator-744455d44c-qj9jg\" (UID: \"7a44818f-a388-4dcb-93f4-b781c1f7bf16\") " pod="openshift-dns-operator/dns-operator-744455d44c-qj9jg" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.648614 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-audit-policies\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.649054 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89c725c4-90e8-4965-b48d-89f3d2771faf-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.649299 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-console-config\") pod \"console-f9d7485db-q86c4\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.649398 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b05e9e31-f26d-4358-a644-796cd3fea7a8-audit-dir\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.649655 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.649863 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17cbc9af-17b4-4815-b527-9d9d9c5112fc-serving-cert\") pod \"route-controller-manager-6576b87f9c-vwhlz\" (UID: \"17cbc9af-17b4-4815-b527-9d9d9c5112fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.649881 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.649901 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jcj5k"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.650157 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89c725c4-90e8-4965-b48d-89f3d2771faf-serving-cert\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.650366 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-849z4"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.651121 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-849z4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.651248 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b05e9e31-f26d-4358-a644-796cd3fea7a8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.651402 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-trusted-ca-bundle\") pod \"console-f9d7485db-q86c4\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.652046 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0347f639-0210-4f2c-99de-915830c86a6d-config\") pod \"console-operator-58897d9998-gk8wd\" (UID: \"0347f639-0210-4f2c-99de-915830c86a6d\") " pod="openshift-console-operator/console-operator-58897d9998-gk8wd" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.652078 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/121d9a3b-d369-4245-84ec-3efeb902ccd8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-thwpk\" (UID: \"121d9a3b-d369-4245-84ec-3efeb902ccd8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thwpk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.652172 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0347f639-0210-4f2c-99de-915830c86a6d-trusted-ca\") pod \"console-operator-58897d9998-gk8wd\" (UID: \"0347f639-0210-4f2c-99de-915830c86a6d\") " pod="openshift-console-operator/console-operator-58897d9998-gk8wd" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.652675 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.652701 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e262a14-a507-44b4-8634-5f4854181f02-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-bnbjg\" (UID: \"8e262a14-a507-44b4-8634-5f4854181f02\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bnbjg" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.652714 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b10a171e-2958-45c1-9a6d-c8c14a7a24ae-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qvjjk\" (UID: \"b10a171e-2958-45c1-9a6d-c8c14a7a24ae\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qvjjk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.652754 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89c725c4-90e8-4965-b48d-89f3d2771faf-audit-dir\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.652882 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-849z4"] Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.653258 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b05e9e31-f26d-4358-a644-796cd3fea7a8-image-import-ca\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.653325 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6851779-1393-4518-be8b-519296708bd7-console-serving-cert\") pod \"console-f9d7485db-q86c4\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.653605 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b05e9e31-f26d-4358-a644-796cd3fea7a8-encryption-config\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.653784 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.654211 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89c725c4-90e8-4965-b48d-89f3d2771faf-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.654359 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b05e9e31-f26d-4358-a644-796cd3fea7a8-audit\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.654502 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6a00c17d-c0fe-49a3-921a-2c19dcea3274-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9rzwn\" (UID: \"6a00c17d-c0fe-49a3-921a-2c19dcea3274\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rzwn" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.654535 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b05e9e31-f26d-4358-a644-796cd3fea7a8-config\") pod \"apiserver-76f77b778f-w8hng\" (UID: 
\"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.654786 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a39985a-ab91-430a-be02-8f2ac1399a37-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zhjsx\" (UID: \"4a39985a-ab91-430a-be02-8f2ac1399a37\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhjsx" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.655002 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/670d8b6b-95a2-4711-98db-3f71e295093b-config\") pod \"machine-api-operator-5694c8668f-clff8\" (UID: \"670d8b6b-95a2-4711-98db-3f71e295093b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clff8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.655108 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/670d8b6b-95a2-4711-98db-3f71e295093b-images\") pod \"machine-api-operator-5694c8668f-clff8\" (UID: \"670d8b6b-95a2-4711-98db-3f71e295093b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clff8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.655388 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55adfb11-256b-4dd4-ba09-00ffd68f6e5e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qf4jk\" (UID: \"55adfb11-256b-4dd4-ba09-00ffd68f6e5e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qf4jk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.655967 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.656170 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b05e9e31-f26d-4358-a644-796cd3fea7a8-etcd-serving-ca\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.656497 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.657102 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.657646 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.657684 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/670d8b6b-95a2-4711-98db-3f71e295093b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-clff8\" (UID: \"670d8b6b-95a2-4711-98db-3f71e295093b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clff8" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.657689 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b10a171e-2958-45c1-9a6d-c8c14a7a24ae-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qvjjk\" (UID: \"b10a171e-2958-45c1-9a6d-c8c14a7a24ae\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qvjjk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.658011 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55adfb11-256b-4dd4-ba09-00ffd68f6e5e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qf4jk\" (UID: \"55adfb11-256b-4dd4-ba09-00ffd68f6e5e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qf4jk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.658981 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89c725c4-90e8-4965-b48d-89f3d2771faf-etcd-client\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.659377 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/89c725c4-90e8-4965-b48d-89f3d2771faf-encryption-config\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.659697 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/121d9a3b-d369-4245-84ec-3efeb902ccd8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-thwpk\" (UID: \"121d9a3b-d369-4245-84ec-3efeb902ccd8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thwpk" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.660079 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b6851779-1393-4518-be8b-519296708bd7-console-oauth-config\") pod \"console-f9d7485db-q86c4\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.661416 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: 
\"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.661497 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/01c8f5c5-8c83-43b2-9070-6b138b246718-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-dwt4c\" (UID: \"01c8f5c5-8c83-43b2-9070-6b138b246718\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dwt4c" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.662043 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0347f639-0210-4f2c-99de-915830c86a6d-serving-cert\") pod \"console-operator-58897d9998-gk8wd\" (UID: \"0347f639-0210-4f2c-99de-915830c86a6d\") " pod="openshift-console-operator/console-operator-58897d9998-gk8wd" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.662592 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a00c17d-c0fe-49a3-921a-2c19dcea3274-serving-cert\") pod \"openshift-config-operator-7777fb866f-9rzwn\" (UID: \"6a00c17d-c0fe-49a3-921a-2c19dcea3274\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rzwn" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.667765 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.687403 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.706317 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.727252 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.746487 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.748289 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8trc\" (UniqueName: \"kubernetes.io/projected/4c861742-2395-4de1-9cc3-1d8328741cbb-kube-api-access-n8trc\") pod \"multus-admission-controller-857f4d67dd-hrflq\" (UID: \"4c861742-2395-4de1-9cc3-1d8328741cbb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hrflq" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.748365 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/58af825f-df23-4365-bf18-1b2a0c2d143f-metrics-tls\") pod \"ingress-operator-5b745b69d9-vxds9\" (UID: \"58af825f-df23-4365-bf18-1b2a0c2d143f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vxds9" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.748398 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcjz4\" (UniqueName: \"kubernetes.io/projected/4ab28893-3f63-4c8a-a023-e0447c39a817-kube-api-access-dcjz4\") pod \"olm-operator-6b444d44fb-9vjcv\" (UID: \"4ab28893-3f63-4c8a-a023-e0447c39a817\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vjcv" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.748429 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1718eaa3-1d2b-46a0-b43d-e6408e75d53a-certs\") pod \"machine-config-server-d2ml5\" (UID: \"1718eaa3-1d2b-46a0-b43d-e6408e75d53a\") " pod="openshift-machine-config-operator/machine-config-server-d2ml5" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.748464 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1718eaa3-1d2b-46a0-b43d-e6408e75d53a-node-bootstrap-token\") pod \"machine-config-server-d2ml5\" (UID: \"1718eaa3-1d2b-46a0-b43d-e6408e75d53a\") " pod="openshift-machine-config-operator/machine-config-server-d2ml5" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.748495 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4ab28893-3f63-4c8a-a023-e0447c39a817-srv-cert\") pod \"olm-operator-6b444d44fb-9vjcv\" (UID: \"4ab28893-3f63-4c8a-a023-e0447c39a817\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vjcv" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.748520 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b83a5e9-9fbe-4404-8dd2-abb2ec6f6e1c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9jw59\" (UID: \"8b83a5e9-9fbe-4404-8dd2-abb2ec6f6e1c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jw59" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.748547 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmvsh\" (UniqueName: \"kubernetes.io/projected/8b83a5e9-9fbe-4404-8dd2-abb2ec6f6e1c-kube-api-access-wmvsh\") pod \"package-server-manager-789f6589d5-9jw59\" (UID: \"8b83a5e9-9fbe-4404-8dd2-abb2ec6f6e1c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jw59" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.748568 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c861742-2395-4de1-9cc3-1d8328741cbb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hrflq\" (UID: \"4c861742-2395-4de1-9cc3-1d8328741cbb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hrflq" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.748629 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvsmq\" (UniqueName: \"kubernetes.io/projected/1718eaa3-1d2b-46a0-b43d-e6408e75d53a-kube-api-access-tvsmq\") pod \"machine-config-server-d2ml5\" (UID: \"1718eaa3-1d2b-46a0-b43d-e6408e75d53a\") " pod="openshift-machine-config-operator/machine-config-server-d2ml5" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.748654 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4ab28893-3f63-4c8a-a023-e0447c39a817-profile-collector-cert\") pod \"olm-operator-6b444d44fb-9vjcv\" (UID: \"4ab28893-3f63-4c8a-a023-e0447c39a817\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vjcv" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 
09:56:11.748685 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wncjw\" (UniqueName: \"kubernetes.io/projected/8b8af0be-d73b-4b8e-b7a2-295834553924-kube-api-access-wncjw\") pod \"migrator-59844c95c7-6f8sx\" (UID: \"8b8af0be-d73b-4b8e-b7a2-295834553924\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6f8sx" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.748700 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58af825f-df23-4365-bf18-1b2a0c2d143f-trusted-ca\") pod \"ingress-operator-5b745b69d9-vxds9\" (UID: \"58af825f-df23-4365-bf18-1b2a0c2d143f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vxds9" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.748756 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/58af825f-df23-4365-bf18-1b2a0c2d143f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vxds9\" (UID: \"58af825f-df23-4365-bf18-1b2a0c2d143f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vxds9" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.748781 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78wgf\" (UniqueName: \"kubernetes.io/projected/58af825f-df23-4365-bf18-1b2a0c2d143f-kube-api-access-78wgf\") pod \"ingress-operator-5b745b69d9-vxds9\" (UID: \"58af825f-df23-4365-bf18-1b2a0c2d143f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vxds9" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.766932 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.786665 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.806583 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.827365 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.847089 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.866809 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.887097 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.907111 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.927560 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.946967 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 
09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.967382 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 09:56:11 crc kubenswrapper[4869]: I0127 09:56:11.987482 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.006648 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.027787 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.031783 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4ab28893-3f63-4c8a-a023-e0447c39a817-srv-cert\") pod \"olm-operator-6b444d44fb-9vjcv\" (UID: \"4ab28893-3f63-4c8a-a023-e0447c39a817\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vjcv" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.046986 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.051981 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4ab28893-3f63-4c8a-a023-e0447c39a817-profile-collector-cert\") pod \"olm-operator-6b444d44fb-9vjcv\" (UID: \"4ab28893-3f63-4c8a-a023-e0447c39a817\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vjcv" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.066793 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.087566 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.107402 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.126706 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.147426 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.167414 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.187758 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.208048 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.244463 4869 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-l9jnc\" (UniqueName: \"kubernetes.io/projected/6962f915-2dbf-4aa0-8e97-79ccb1dc35de-kube-api-access-l9jnc\") pod \"machine-approver-56656f9798-v5prp\" (UID: \"6962f915-2dbf-4aa0-8e97-79ccb1dc35de\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v5prp" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.247687 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.268418 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.287733 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.295023 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v5prp" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.309087 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 09:56:12 crc kubenswrapper[4869]: W0127 09:56:12.318894 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6962f915_2dbf_4aa0_8e97_79ccb1dc35de.slice/crio-c2cb736dba3c72289d42de1a49676850aeeefb3e785ea7b4abd96cf2e9c8bff4 WatchSource:0}: Error finding container c2cb736dba3c72289d42de1a49676850aeeefb3e785ea7b4abd96cf2e9c8bff4: Status 404 returned error can't find the container with id c2cb736dba3c72289d42de1a49676850aeeefb3e785ea7b4abd96cf2e9c8bff4 Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.327657 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.387756 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.390528 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7g4f\" (UniqueName: \"kubernetes.io/projected/6951bfc9-9908-4404-9000-cc243c35a314-kube-api-access-f7g4f\") pod \"controller-manager-879f6c89f-4sqz8\" (UID: \"6951bfc9-9908-4404-9000-cc243c35a314\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.408272 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.414474 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/58af825f-df23-4365-bf18-1b2a0c2d143f-metrics-tls\") pod \"ingress-operator-5b745b69d9-vxds9\" (UID: \"58af825f-df23-4365-bf18-1b2a0c2d143f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vxds9" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.428218 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.454095 4869 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.460587 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58af825f-df23-4365-bf18-1b2a0c2d143f-trusted-ca\") pod \"ingress-operator-5b745b69d9-vxds9\" (UID: \"58af825f-df23-4365-bf18-1b2a0c2d143f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vxds9" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.468124 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.487910 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.507362 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.512632 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c861742-2395-4de1-9cc3-1d8328741cbb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hrflq\" (UID: \"4c861742-2395-4de1-9cc3-1d8328741cbb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hrflq" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.527575 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.548415 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.565660 4869 request.go:700] Waited for 1.009656838s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.567610 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.587552 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.607458 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.627993 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.646420 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.647129 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.668400 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.688622 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.707469 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.731277 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.746799 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 09:56:12 crc kubenswrapper[4869]: E0127 09:56:12.748552 4869 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Jan 27 09:56:12 crc kubenswrapper[4869]: E0127 09:56:12.748611 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1718eaa3-1d2b-46a0-b43d-e6408e75d53a-certs podName:1718eaa3-1d2b-46a0-b43d-e6408e75d53a nodeName:}" failed. No retries permitted until 2026-01-27 09:56:13.248591079 +0000 UTC m=+141.869015162 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/1718eaa3-1d2b-46a0-b43d-e6408e75d53a-certs") pod "machine-config-server-d2ml5" (UID: "1718eaa3-1d2b-46a0-b43d-e6408e75d53a") : failed to sync secret cache: timed out waiting for the condition Jan 27 09:56:12 crc kubenswrapper[4869]: E0127 09:56:12.748645 4869 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 27 09:56:12 crc kubenswrapper[4869]: E0127 09:56:12.748658 4869 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 09:56:12 crc kubenswrapper[4869]: E0127 09:56:12.748675 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1718eaa3-1d2b-46a0-b43d-e6408e75d53a-node-bootstrap-token podName:1718eaa3-1d2b-46a0-b43d-e6408e75d53a nodeName:}" failed. No retries permitted until 2026-01-27 09:56:13.248666223 +0000 UTC m=+141.869090306 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/1718eaa3-1d2b-46a0-b43d-e6408e75d53a-node-bootstrap-token") pod "machine-config-server-d2ml5" (UID: "1718eaa3-1d2b-46a0-b43d-e6408e75d53a") : failed to sync secret cache: timed out waiting for the condition Jan 27 09:56:12 crc kubenswrapper[4869]: E0127 09:56:12.748757 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b83a5e9-9fbe-4404-8dd2-abb2ec6f6e1c-package-server-manager-serving-cert podName:8b83a5e9-9fbe-4404-8dd2-abb2ec6f6e1c nodeName:}" failed. 
No retries permitted until 2026-01-27 09:56:13.248735156 +0000 UTC m=+141.869159309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/8b83a5e9-9fbe-4404-8dd2-abb2ec6f6e1c-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-9jw59" (UID: "8b83a5e9-9fbe-4404-8dd2-abb2ec6f6e1c") : failed to sync secret cache: timed out waiting for the condition Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.766715 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.768041 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v5prp" event={"ID":"6962f915-2dbf-4aa0-8e97-79ccb1dc35de","Type":"ContainerStarted","Data":"c38c53d86255e8eedc0781523c121130c6f617a5fb501517345d776911ae5166"} Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.768082 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v5prp" event={"ID":"6962f915-2dbf-4aa0-8e97-79ccb1dc35de","Type":"ContainerStarted","Data":"2495f35efa2c4c9735fe61c9a3dc9e1f7e9ff3d029cac69c1953a359d43e3c62"} Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.768093 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v5prp" event={"ID":"6962f915-2dbf-4aa0-8e97-79ccb1dc35de","Type":"ContainerStarted","Data":"c2cb736dba3c72289d42de1a49676850aeeefb3e785ea7b4abd96cf2e9c8bff4"} Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.787522 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.807143 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.828947 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.835284 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4sqz8"] Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.846863 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.867271 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.887825 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.907282 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.926492 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.947728 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 09:56:12 
crc kubenswrapper[4869]: I0127 09:56:12.966752 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 09:56:12 crc kubenswrapper[4869]: I0127 09:56:12.992067 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.007535 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.028257 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.047415 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.087765 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.107734 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.127948 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.147224 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.167116 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.187105 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.207091 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.227338 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.253589 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.267209 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.276245 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b83a5e9-9fbe-4404-8dd2-abb2ec6f6e1c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9jw59\" (UID: \"8b83a5e9-9fbe-4404-8dd2-abb2ec6f6e1c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jw59" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.276482 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/1718eaa3-1d2b-46a0-b43d-e6408e75d53a-certs\") pod \"machine-config-server-d2ml5\" (UID: \"1718eaa3-1d2b-46a0-b43d-e6408e75d53a\") " pod="openshift-machine-config-operator/machine-config-server-d2ml5" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.276518 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1718eaa3-1d2b-46a0-b43d-e6408e75d53a-node-bootstrap-token\") pod \"machine-config-server-d2ml5\" (UID: \"1718eaa3-1d2b-46a0-b43d-e6408e75d53a\") " pod="openshift-machine-config-operator/machine-config-server-d2ml5" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.281171 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b83a5e9-9fbe-4404-8dd2-abb2ec6f6e1c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9jw59\" (UID: \"8b83a5e9-9fbe-4404-8dd2-abb2ec6f6e1c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jw59" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.281169 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1718eaa3-1d2b-46a0-b43d-e6408e75d53a-certs\") pod \"machine-config-server-d2ml5\" (UID: \"1718eaa3-1d2b-46a0-b43d-e6408e75d53a\") " pod="openshift-machine-config-operator/machine-config-server-d2ml5" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.281613 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1718eaa3-1d2b-46a0-b43d-e6408e75d53a-node-bootstrap-token\") pod \"machine-config-server-d2ml5\" (UID: \"1718eaa3-1d2b-46a0-b43d-e6408e75d53a\") " pod="openshift-machine-config-operator/machine-config-server-d2ml5" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.287608 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.307077 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.327373 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.347559 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.368140 4869 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.386964 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.406678 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.441806 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b10a171e-2958-45c1-9a6d-c8c14a7a24ae-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-qvjjk\" (UID: \"b10a171e-2958-45c1-9a6d-c8c14a7a24ae\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qvjjk" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.460703 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tknxj\" (UniqueName: \"kubernetes.io/projected/b6851779-1393-4518-be8b-519296708bd7-kube-api-access-tknxj\") pod \"console-f9d7485db-q86c4\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.478627 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlvsc\" (UniqueName: \"kubernetes.io/projected/89c725c4-90e8-4965-b48d-89f3d2771faf-kube-api-access-dlvsc\") pod \"apiserver-7bbb656c7d-r7z5l\" (UID: \"89c725c4-90e8-4965-b48d-89f3d2771faf\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.500450 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks7jn\" (UniqueName: \"kubernetes.io/projected/8e262a14-a507-44b4-8634-5f4854181f02-kube-api-access-ks7jn\") pod \"cluster-image-registry-operator-dc59b4c8b-bnbjg\" (UID: \"8e262a14-a507-44b4-8634-5f4854181f02\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bnbjg" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.504129 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.519068 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qvjjk" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.519095 4869 csr.go:261] certificate signing request csr-b4mzv is approved, waiting to be issued Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.521952 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcbq5\" (UniqueName: \"kubernetes.io/projected/493c38dc-c859-4715-b97f-be1388ee2162-kube-api-access-tcbq5\") pod \"downloads-7954f5f757-dpsrp\" (UID: \"493c38dc-c859-4715-b97f-be1388ee2162\") " pod="openshift-console/downloads-7954f5f757-dpsrp" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.524933 4869 csr.go:257] certificate signing request csr-b4mzv is issued Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.546291 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4gwz\" (UniqueName: \"kubernetes.io/projected/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-kube-api-access-b4gwz\") pod \"oauth-openshift-558db77b4-rnv4g\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.556934 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.564044 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8trwt\" (UniqueName: \"kubernetes.io/projected/6a00c17d-c0fe-49a3-921a-2c19dcea3274-kube-api-access-8trwt\") pod \"openshift-config-operator-7777fb866f-9rzwn\" (UID: \"6a00c17d-c0fe-49a3-921a-2c19dcea3274\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rzwn" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.566027 4869 request.go:700] Waited for 1.916613006s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.584912 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wcr9\" (UniqueName: \"kubernetes.io/projected/7a44818f-a388-4dcb-93f4-b781c1f7bf16-kube-api-access-7wcr9\") pod \"dns-operator-744455d44c-qj9jg\" (UID: \"7a44818f-a388-4dcb-93f4-b781c1f7bf16\") " pod="openshift-dns-operator/dns-operator-744455d44c-qj9jg" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.606766 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnlgk\" (UniqueName: \"kubernetes.io/projected/17cbc9af-17b4-4815-b527-9d9d9c5112fc-kube-api-access-hnlgk\") pod \"route-controller-manager-6576b87f9c-vwhlz\" (UID: \"17cbc9af-17b4-4815-b527-9d9d9c5112fc\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.627493 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqhn6\" (UniqueName: \"kubernetes.io/projected/121d9a3b-d369-4245-84ec-3efeb902ccd8-kube-api-access-nqhn6\") pod \"openshift-controller-manager-operator-756b6f6bc6-thwpk\" (UID: \"121d9a3b-d369-4245-84ec-3efeb902ccd8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thwpk" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.641480 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.655172 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4tvf\" (UniqueName: \"kubernetes.io/projected/0347f639-0210-4f2c-99de-915830c86a6d-kube-api-access-t4tvf\") pod \"console-operator-58897d9998-gk8wd\" (UID: \"0347f639-0210-4f2c-99de-915830c86a6d\") " pod="openshift-console-operator/console-operator-58897d9998-gk8wd" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.661626 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs6jp\" (UniqueName: \"kubernetes.io/projected/b05e9e31-f26d-4358-a644-796cd3fea7a8-kube-api-access-rs6jp\") pod \"apiserver-76f77b778f-w8hng\" (UID: \"b05e9e31-f26d-4358-a644-796cd3fea7a8\") " pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.667093 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.688437 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.704950 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thwpk" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.707428 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.711544 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qj9jg" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.723084 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-q86c4"] Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.725738 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rzwn" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.727103 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.743187 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qvjjk"] Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.746070 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.763000 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8e262a14-a507-44b4-8634-5f4854181f02-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-bnbjg\" (UID: \"8e262a14-a507-44b4-8634-5f4854181f02\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bnbjg" Jan 27 09:56:13 crc kubenswrapper[4869]: W0127 09:56:13.763627 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb10a171e_2958_45c1_9a6d_c8c14a7a24ae.slice/crio-d1de59e4a0269407f2978d398ba323356e2811f71e0ae838e4a6127ea0fe3651 WatchSource:0}: Error finding container d1de59e4a0269407f2978d398ba323356e2811f71e0ae838e4a6127ea0fe3651: Status 404 returned error can't find the container with id d1de59e4a0269407f2978d398ba323356e2811f71e0ae838e4a6127ea0fe3651 Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.788793 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qckk9\" (UniqueName: \"kubernetes.io/projected/55adfb11-256b-4dd4-ba09-00ffd68f6e5e-kube-api-access-qckk9\") pod \"openshift-apiserver-operator-796bbdcf4f-qf4jk\" (UID: \"55adfb11-256b-4dd4-ba09-00ffd68f6e5e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qf4jk" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.791539 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-dpsrp" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.795860 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gk8wd" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.800434 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qvjjk" event={"ID":"b10a171e-2958-45c1-9a6d-c8c14a7a24ae","Type":"ContainerStarted","Data":"d1de59e4a0269407f2978d398ba323356e2811f71e0ae838e4a6127ea0fe3651"} Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.806796 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-q86c4" event={"ID":"b6851779-1393-4518-be8b-519296708bd7","Type":"ContainerStarted","Data":"5f23d6383ba9588cc57faf791153faa0f85b811c41a3606c51411120054c2450"} Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.821479 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" event={"ID":"6951bfc9-9908-4404-9000-cc243c35a314","Type":"ContainerStarted","Data":"9942e9717e18a890f3a560a96f2d6d4a4b791a8c0a5bdfacd39ada94554f8c12"} Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.821517 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" event={"ID":"6951bfc9-9908-4404-9000-cc243c35a314","Type":"ContainerStarted","Data":"333f9d4ced60af2986cdda9275136ff7c26d112571d600b03571ed75bc80bb4e"} Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.822322 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.826109 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcmjp\" (UniqueName: \"kubernetes.io/projected/01c8f5c5-8c83-43b2-9070-6b138b246718-kube-api-access-vcmjp\") pod \"cluster-samples-operator-665b6dd947-dwt4c\" (UID: \"01c8f5c5-8c83-43b2-9070-6b138b246718\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dwt4c" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.827076 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a39985a-ab91-430a-be02-8f2ac1399a37-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zhjsx\" (UID: \"4a39985a-ab91-430a-be02-8f2ac1399a37\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhjsx" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.830622 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.843899 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pl9c\" (UniqueName: \"kubernetes.io/projected/670d8b6b-95a2-4711-98db-3f71e295093b-kube-api-access-9pl9c\") pod \"machine-api-operator-5694c8668f-clff8\" (UID: \"670d8b6b-95a2-4711-98db-3f71e295093b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-clff8" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.872306 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-clff8" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.872607 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8trc\" (UniqueName: \"kubernetes.io/projected/4c861742-2395-4de1-9cc3-1d8328741cbb-kube-api-access-n8trc\") pod \"multus-admission-controller-857f4d67dd-hrflq\" (UID: \"4c861742-2395-4de1-9cc3-1d8328741cbb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hrflq" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.885841 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz"] Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.898633 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcjz4\" (UniqueName: \"kubernetes.io/projected/4ab28893-3f63-4c8a-a023-e0447c39a817-kube-api-access-dcjz4\") pod \"olm-operator-6b444d44fb-9vjcv\" (UID: \"4ab28893-3f63-4c8a-a023-e0447c39a817\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vjcv" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.901663 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.905861 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hrflq" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.919611 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmvsh\" (UniqueName: \"kubernetes.io/projected/8b83a5e9-9fbe-4404-8dd2-abb2ec6f6e1c-kube-api-access-wmvsh\") pod \"package-server-manager-789f6589d5-9jw59\" (UID: \"8b83a5e9-9fbe-4404-8dd2-abb2ec6f6e1c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jw59" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.926382 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvsmq\" (UniqueName: \"kubernetes.io/projected/1718eaa3-1d2b-46a0-b43d-e6408e75d53a-kube-api-access-tvsmq\") pod \"machine-config-server-d2ml5\" (UID: \"1718eaa3-1d2b-46a0-b43d-e6408e75d53a\") " pod="openshift-machine-config-operator/machine-config-server-d2ml5" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.935271 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jw59" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.941683 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wncjw\" (UniqueName: \"kubernetes.io/projected/8b8af0be-d73b-4b8e-b7a2-295834553924-kube-api-access-wncjw\") pod \"migrator-59844c95c7-6f8sx\" (UID: \"8b8af0be-d73b-4b8e-b7a2-295834553924\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6f8sx" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.956259 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-d2ml5" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.964950 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/58af825f-df23-4365-bf18-1b2a0c2d143f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vxds9\" (UID: \"58af825f-df23-4365-bf18-1b2a0c2d143f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vxds9" Jan 27 09:56:13 crc kubenswrapper[4869]: I0127 09:56:13.980436 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78wgf\" (UniqueName: \"kubernetes.io/projected/58af825f-df23-4365-bf18-1b2a0c2d143f-kube-api-access-78wgf\") pod \"ingress-operator-5b745b69d9-vxds9\" (UID: \"58af825f-df23-4365-bf18-1b2a0c2d143f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vxds9" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:13.996163 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qj9jg"] Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.001325 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qf4jk" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.019077 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dwt4c" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.036989 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bnbjg" Jan 27 09:56:14 crc kubenswrapper[4869]: W0127 09:56:14.040171 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1718eaa3_1d2b_46a0_b43d_e6408e75d53a.slice/crio-1ffc9dcee070591fd1aa4cdef75353daf263d02fec1c6af02ca2799f3d1813eb WatchSource:0}: Error finding container 1ffc9dcee070591fd1aa4cdef75353daf263d02fec1c6af02ca2799f3d1813eb: Status 404 returned error can't find the container with id 1ffc9dcee070591fd1aa4cdef75353daf263d02fec1c6af02ca2799f3d1813eb Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.042870 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l"] Jan 27 09:56:14 crc kubenswrapper[4869]: W0127 09:56:14.044793 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a44818f_a388_4dcb_93f4_b781c1f7bf16.slice/crio-14f459d943e2cfdc42f2186b05f325b7713614ce257e1d58201c30d94cba6e36 WatchSource:0}: Error finding container 14f459d943e2cfdc42f2186b05f325b7713614ce257e1d58201c30d94cba6e36: Status 404 returned error can't find the container with id 14f459d943e2cfdc42f2186b05f325b7713614ce257e1d58201c30d94cba6e36 Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.100907 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17abb21b-00f0-41dd-80a3-5d4cb9acc1e6-service-ca-bundle\") pod \"router-default-5444994796-ffwjx\" (UID: \"17abb21b-00f0-41dd-80a3-5d4cb9acc1e6\") " pod="openshift-ingress/router-default-5444994796-ffwjx" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.100960 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn5gr\" (UniqueName: \"kubernetes.io/projected/17abb21b-00f0-41dd-80a3-5d4cb9acc1e6-kube-api-access-qn5gr\") pod \"router-default-5444994796-ffwjx\" (UID: \"17abb21b-00f0-41dd-80a3-5d4cb9acc1e6\") " pod="openshift-ingress/router-default-5444994796-ffwjx" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.100990 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/61352dfb-6006-4c3f-b404-b32f8a54c08d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101029 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw65n\" (UniqueName: \"kubernetes.io/projected/2ecc898c-2377-4e6f-a02e-028eeca5eec8-kube-api-access-lw65n\") pod \"control-plane-machine-set-operator-78cbb6b69f-j9w4z\" (UID: \"2ecc898c-2377-4e6f-a02e-028eeca5eec8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j9w4z" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101056 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tks29\" (UniqueName: \"kubernetes.io/projected/ebc0fbc2-11a3-48a6-9442-81ffacb1516a-kube-api-access-tks29\") pod \"machine-config-operator-74547568cd-bn4v9\" (UID: \"ebc0fbc2-11a3-48a6-9442-81ffacb1516a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn4v9" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101077 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fcb02167-7185-4960-a665-fca3f7d2c220-etcd-ca\") pod \"etcd-operator-b45778765-prtqz\" (UID: \"fcb02167-7185-4960-a665-fca3f7d2c220\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101109 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c778877d-77ce-493e-a787-d0b76ff13a77-webhook-cert\") pod \"packageserver-d55dfcdfc-rpng5\" (UID: \"c778877d-77ce-493e-a787-d0b76ff13a77\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101184 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsdbj\" (UniqueName: \"kubernetes.io/projected/a574e648-77e2-46a1-a2ad-af18e6e9ad57-kube-api-access-lsdbj\") pod \"marketplace-operator-79b997595-6kntj\" (UID: \"a574e648-77e2-46a1-a2ad-af18e6e9ad57\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101220 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/61352dfb-6006-4c3f-b404-b32f8a54c08d-registry-certificates\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101270 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/17abb21b-00f0-41dd-80a3-5d4cb9acc1e6-default-certificate\") pod \"router-default-5444994796-ffwjx\" (UID: \"17abb21b-00f0-41dd-80a3-5d4cb9acc1e6\") " pod="openshift-ingress/router-default-5444994796-ffwjx" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101361 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/61352dfb-6006-4c3f-b404-b32f8a54c08d-registry-tls\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101377 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8df9c601-f464-4501-8418-d4abbbe22f6b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-p5qm5\" (UID: \"8df9c601-f464-4501-8418-d4abbbe22f6b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p5qm5" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101394 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbpfb\" (UniqueName: \"kubernetes.io/projected/8fa7631e-0a16-43b0-8ac3-dc06b6e9cbb4-kube-api-access-bbpfb\") pod \"service-ca-operator-777779d784-2hgg7\" (UID: \"8fa7631e-0a16-43b0-8ac3-dc06b6e9cbb4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2hgg7" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101426 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17abb21b-00f0-41dd-80a3-5d4cb9acc1e6-metrics-certs\") pod \"router-default-5444994796-ffwjx\" (UID: \"17abb21b-00f0-41dd-80a3-5d4cb9acc1e6\") " pod="openshift-ingress/router-default-5444994796-ffwjx" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101478 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxw6v\" (UniqueName: \"kubernetes.io/projected/61352dfb-6006-4c3f-b404-b32f8a54c08d-kube-api-access-dxw6v\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101503 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c778877d-77ce-493e-a787-d0b76ff13a77-tmpfs\") pod \"packageserver-d55dfcdfc-rpng5\" (UID: \"c778877d-77ce-493e-a787-d0b76ff13a77\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101516 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ebc0fbc2-11a3-48a6-9442-81ffacb1516a-proxy-tls\") pod \"machine-config-operator-74547568cd-bn4v9\" (UID: \"ebc0fbc2-11a3-48a6-9442-81ffacb1516a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn4v9" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101536 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a574e648-77e2-46a1-a2ad-af18e6e9ad57-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6kntj\" (UID: \"a574e648-77e2-46a1-a2ad-af18e6e9ad57\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101580 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c508e45a-d6fc-419c-960b-7603bf3209b2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ktdt7\" (UID: \"c508e45a-d6fc-419c-960b-7603bf3209b2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktdt7" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101658 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/51446a4f-e443-47ee-9ca8-a67fdaf62a7e-signing-cabundle\") pod \"service-ca-9c57cc56f-frmph\" (UID: \"51446a4f-e443-47ee-9ca8-a67fdaf62a7e\") " pod="openshift-service-ca/service-ca-9c57cc56f-frmph" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101674 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/17abb21b-00f0-41dd-80a3-5d4cb9acc1e6-stats-auth\") pod \"router-default-5444994796-ffwjx\" (UID: \"17abb21b-00f0-41dd-80a3-5d4cb9acc1e6\") " pod="openshift-ingress/router-default-5444994796-ffwjx" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101688 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a574e648-77e2-46a1-a2ad-af18e6e9ad57-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6kntj\" (UID: \"a574e648-77e2-46a1-a2ad-af18e6e9ad57\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101702 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ebc0fbc2-11a3-48a6-9442-81ffacb1516a-images\") pod \"machine-config-operator-74547568cd-bn4v9\" (UID: \"ebc0fbc2-11a3-48a6-9442-81ffacb1516a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn4v9" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101716 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c778877d-77ce-493e-a787-d0b76ff13a77-apiservice-cert\") pod \"packageserver-d55dfcdfc-rpng5\" (UID: \"c778877d-77ce-493e-a787-d0b76ff13a77\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101747 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.101763 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fcb02167-7185-4960-a665-fca3f7d2c220-etcd-service-ca\") pod \"etcd-operator-b45778765-prtqz\" (UID: \"fcb02167-7185-4960-a665-fca3f7d2c220\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" Jan 27 09:56:14 crc kubenswrapper[4869]: E0127 09:56:14.104041 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:14.604026002 +0000 UTC m=+143.224450085 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.104104 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fa7631e-0a16-43b0-8ac3-dc06b6e9cbb4-serving-cert\") pod \"service-ca-operator-777779d784-2hgg7\" (UID: \"8fa7631e-0a16-43b0-8ac3-dc06b6e9cbb4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2hgg7" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.104154 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebc0fbc2-11a3-48a6-9442-81ffacb1516a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bn4v9\" (UID: \"ebc0fbc2-11a3-48a6-9442-81ffacb1516a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn4v9" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.104291 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21b1512e-af50-4bdd-8619-5bff9a4ce995-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-smmkq\" (UID: \"21b1512e-af50-4bdd-8619-5bff9a4ce995\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-smmkq" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.104319 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61352dfb-6006-4c3f-b404-b32f8a54c08d-trusted-ca\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.104396 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcb02167-7185-4960-a665-fca3f7d2c220-serving-cert\") pod \"etcd-operator-b45778765-prtqz\" (UID: \"fcb02167-7185-4960-a665-fca3f7d2c220\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.104474 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/c508e45a-d6fc-419c-960b-7603bf3209b2-config\") pod \"kube-controller-manager-operator-78b949d7b-ktdt7\" (UID: \"c508e45a-d6fc-419c-960b-7603bf3209b2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktdt7" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.107152 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8df9c601-f464-4501-8418-d4abbbe22f6b-proxy-tls\") pod \"machine-config-controller-84d6567774-p5qm5\" (UID: \"8df9c601-f464-4501-8418-d4abbbe22f6b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p5qm5" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.107362 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4ww4\" (UniqueName: \"kubernetes.io/projected/21b1512e-af50-4bdd-8619-5bff9a4ce995-kube-api-access-v4ww4\") pod \"kube-storage-version-migrator-operator-b67b599dd-smmkq\" (UID: \"21b1512e-af50-4bdd-8619-5bff9a4ce995\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-smmkq" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.107490 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fa7631e-0a16-43b0-8ac3-dc06b6e9cbb4-config\") pod \"service-ca-operator-777779d784-2hgg7\" (UID: \"8fa7631e-0a16-43b0-8ac3-dc06b6e9cbb4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2hgg7" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.107526 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc8wr\" (UniqueName: \"kubernetes.io/projected/c778877d-77ce-493e-a787-d0b76ff13a77-kube-api-access-qc8wr\") pod \"packageserver-d55dfcdfc-rpng5\" (UID: \"c778877d-77ce-493e-a787-d0b76ff13a77\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.107634 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/51446a4f-e443-47ee-9ca8-a67fdaf62a7e-signing-key\") pod \"service-ca-9c57cc56f-frmph\" (UID: \"51446a4f-e443-47ee-9ca8-a67fdaf62a7e\") " pod="openshift-service-ca/service-ca-9c57cc56f-frmph" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.107727 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c508e45a-d6fc-419c-960b-7603bf3209b2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ktdt7\" (UID: \"c508e45a-d6fc-419c-960b-7603bf3209b2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktdt7" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.107780 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdq45\" (UniqueName: \"kubernetes.io/projected/8df9c601-f464-4501-8418-d4abbbe22f6b-kube-api-access-hdq45\") pod \"machine-config-controller-84d6567774-p5qm5\" (UID: \"8df9c601-f464-4501-8418-d4abbbe22f6b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p5qm5" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 
09:56:14.107934 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/61352dfb-6006-4c3f-b404-b32f8a54c08d-bound-sa-token\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.108209 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcb02167-7185-4960-a665-fca3f7d2c220-config\") pod \"etcd-operator-b45778765-prtqz\" (UID: \"fcb02167-7185-4960-a665-fca3f7d2c220\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.108234 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fcb02167-7185-4960-a665-fca3f7d2c220-etcd-client\") pod \"etcd-operator-b45778765-prtqz\" (UID: \"fcb02167-7185-4960-a665-fca3f7d2c220\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.108294 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/61352dfb-6006-4c3f-b404-b32f8a54c08d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.108320 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2ecc898c-2377-4e6f-a02e-028eeca5eec8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-j9w4z\" (UID: \"2ecc898c-2377-4e6f-a02e-028eeca5eec8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j9w4z" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.108380 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b1512e-af50-4bdd-8619-5bff9a4ce995-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-smmkq\" (UID: \"21b1512e-af50-4bdd-8619-5bff9a4ce995\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-smmkq" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.108403 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6sdb\" (UniqueName: \"kubernetes.io/projected/fcb02167-7185-4960-a665-fca3f7d2c220-kube-api-access-f6sdb\") pod \"etcd-operator-b45778765-prtqz\" (UID: \"fcb02167-7185-4960-a665-fca3f7d2c220\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.108425 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t7k7\" (UniqueName: \"kubernetes.io/projected/51446a4f-e443-47ee-9ca8-a67fdaf62a7e-kube-api-access-8t7k7\") pod \"service-ca-9c57cc56f-frmph\" (UID: \"51446a4f-e443-47ee-9ca8-a67fdaf62a7e\") " pod="openshift-service-ca/service-ca-9c57cc56f-frmph" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 
09:56:14.111453 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhjsx" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.160387 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vjcv" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.192234 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6f8sx" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.198306 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vxds9" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216188 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216326 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlggf\" (UniqueName: \"kubernetes.io/projected/a7c3d5cf-ce3d-4b64-b685-fe70bcd252a0-kube-api-access-xlggf\") pod \"dns-default-q4j8x\" (UID: \"a7c3d5cf-ce3d-4b64-b685-fe70bcd252a0\") " pod="openshift-dns/dns-default-q4j8x" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216362 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbpfb\" (UniqueName: \"kubernetes.io/projected/8fa7631e-0a16-43b0-8ac3-dc06b6e9cbb4-kube-api-access-bbpfb\") pod \"service-ca-operator-777779d784-2hgg7\" (UID: \"8fa7631e-0a16-43b0-8ac3-dc06b6e9cbb4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2hgg7" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216381 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/61352dfb-6006-4c3f-b404-b32f8a54c08d-registry-tls\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216399 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8df9c601-f464-4501-8418-d4abbbe22f6b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-p5qm5\" (UID: \"8df9c601-f464-4501-8418-d4abbbe22f6b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p5qm5" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216423 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17abb21b-00f0-41dd-80a3-5d4cb9acc1e6-metrics-certs\") pod \"router-default-5444994796-ffwjx\" (UID: \"17abb21b-00f0-41dd-80a3-5d4cb9acc1e6\") " pod="openshift-ingress/router-default-5444994796-ffwjx" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216459 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/0421ca21-bf4e-4c89-9a3d-18a7603c1084-mountpoint-dir\") pod \"csi-hostpathplugin-jcj5k\" (UID: \"0421ca21-bf4e-4c89-9a3d-18a7603c1084\") " pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216490 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0421ca21-bf4e-4c89-9a3d-18a7603c1084-registration-dir\") pod \"csi-hostpathplugin-jcj5k\" (UID: \"0421ca21-bf4e-4c89-9a3d-18a7603c1084\") " pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216505 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxw6v\" (UniqueName: \"kubernetes.io/projected/61352dfb-6006-4c3f-b404-b32f8a54c08d-kube-api-access-dxw6v\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216538 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c778877d-77ce-493e-a787-d0b76ff13a77-tmpfs\") pod \"packageserver-d55dfcdfc-rpng5\" (UID: \"c778877d-77ce-493e-a787-d0b76ff13a77\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216563 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ebc0fbc2-11a3-48a6-9442-81ffacb1516a-proxy-tls\") pod \"machine-config-operator-74547568cd-bn4v9\" (UID: \"ebc0fbc2-11a3-48a6-9442-81ffacb1516a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn4v9" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216623 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6b4b05c-6a93-4e36-810e-bb0da0a20d55-serving-cert\") pod \"authentication-operator-69f744f599-xkkb6\" (UID: \"c6b4b05c-6a93-4e36-810e-bb0da0a20d55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xkkb6" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216655 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a574e648-77e2-46a1-a2ad-af18e6e9ad57-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6kntj\" (UID: \"a574e648-77e2-46a1-a2ad-af18e6e9ad57\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216683 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c508e45a-d6fc-419c-960b-7603bf3209b2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ktdt7\" (UID: \"c508e45a-d6fc-419c-960b-7603bf3209b2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktdt7" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216701 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a7c3d5cf-ce3d-4b64-b685-fe70bcd252a0-metrics-tls\") pod \"dns-default-q4j8x\" (UID: 
\"a7c3d5cf-ce3d-4b64-b685-fe70bcd252a0\") " pod="openshift-dns/dns-default-q4j8x" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216720 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a2ec119-d8f3-4edb-aa2f-d4ffd3617458-config-volume\") pod \"collect-profiles-29491785-vk98z\" (UID: \"3a2ec119-d8f3-4edb-aa2f-d4ffd3617458\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216736 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6b4b05c-6a93-4e36-810e-bb0da0a20d55-service-ca-bundle\") pod \"authentication-operator-69f744f599-xkkb6\" (UID: \"c6b4b05c-6a93-4e36-810e-bb0da0a20d55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xkkb6" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216761 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/51446a4f-e443-47ee-9ca8-a67fdaf62a7e-signing-cabundle\") pod \"service-ca-9c57cc56f-frmph\" (UID: \"51446a4f-e443-47ee-9ca8-a67fdaf62a7e\") " pod="openshift-service-ca/service-ca-9c57cc56f-frmph" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216810 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jnqm\" (UniqueName: \"kubernetes.io/projected/ea00b67e-1ce1-40ce-be90-3f491e3c4ea9-kube-api-access-2jnqm\") pod \"ingress-canary-849z4\" (UID: \"ea00b67e-1ce1-40ce-be90-3f491e3c4ea9\") " pod="openshift-ingress-canary/ingress-canary-849z4" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216824 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8kwf\" (UniqueName: \"kubernetes.io/projected/0421ca21-bf4e-4c89-9a3d-18a7603c1084-kube-api-access-z8kwf\") pod \"csi-hostpathplugin-jcj5k\" (UID: \"0421ca21-bf4e-4c89-9a3d-18a7603c1084\") " pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216865 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/17abb21b-00f0-41dd-80a3-5d4cb9acc1e6-stats-auth\") pod \"router-default-5444994796-ffwjx\" (UID: \"17abb21b-00f0-41dd-80a3-5d4cb9acc1e6\") " pod="openshift-ingress/router-default-5444994796-ffwjx" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216879 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a574e648-77e2-46a1-a2ad-af18e6e9ad57-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6kntj\" (UID: \"a574e648-77e2-46a1-a2ad-af18e6e9ad57\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216894 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ebc0fbc2-11a3-48a6-9442-81ffacb1516a-images\") pod \"machine-config-operator-74547568cd-bn4v9\" (UID: \"ebc0fbc2-11a3-48a6-9442-81ffacb1516a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn4v9" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216909 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c778877d-77ce-493e-a787-d0b76ff13a77-apiservice-cert\") pod \"packageserver-d55dfcdfc-rpng5\" (UID: \"c778877d-77ce-493e-a787-d0b76ff13a77\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216935 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q77tf\" (UniqueName: \"kubernetes.io/projected/3a2ec119-d8f3-4edb-aa2f-d4ffd3617458-kube-api-access-q77tf\") pod \"collect-profiles-29491785-vk98z\" (UID: \"3a2ec119-d8f3-4edb-aa2f-d4ffd3617458\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216958 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fcb02167-7185-4960-a665-fca3f7d2c220-etcd-service-ca\") pod \"etcd-operator-b45778765-prtqz\" (UID: \"fcb02167-7185-4960-a665-fca3f7d2c220\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216973 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a2ec119-d8f3-4edb-aa2f-d4ffd3617458-secret-volume\") pod \"collect-profiles-29491785-vk98z\" (UID: \"3a2ec119-d8f3-4edb-aa2f-d4ffd3617458\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.216999 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fa7631e-0a16-43b0-8ac3-dc06b6e9cbb4-serving-cert\") pod \"service-ca-operator-777779d784-2hgg7\" (UID: \"8fa7631e-0a16-43b0-8ac3-dc06b6e9cbb4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2hgg7" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217016 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebc0fbc2-11a3-48a6-9442-81ffacb1516a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bn4v9\" (UID: \"ebc0fbc2-11a3-48a6-9442-81ffacb1516a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn4v9" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217040 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61352dfb-6006-4c3f-b404-b32f8a54c08d-trusted-ca\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217055 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21b1512e-af50-4bdd-8619-5bff9a4ce995-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-smmkq\" (UID: \"21b1512e-af50-4bdd-8619-5bff9a4ce995\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-smmkq" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217071 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/fcb02167-7185-4960-a665-fca3f7d2c220-serving-cert\") pod \"etcd-operator-b45778765-prtqz\" (UID: \"fcb02167-7185-4960-a665-fca3f7d2c220\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217088 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c508e45a-d6fc-419c-960b-7603bf3209b2-config\") pod \"kube-controller-manager-operator-78b949d7b-ktdt7\" (UID: \"c508e45a-d6fc-419c-960b-7603bf3209b2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktdt7" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217130 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8df9c601-f464-4501-8418-d4abbbe22f6b-proxy-tls\") pod \"machine-config-controller-84d6567774-p5qm5\" (UID: \"8df9c601-f464-4501-8418-d4abbbe22f6b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p5qm5" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217164 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4ww4\" (UniqueName: \"kubernetes.io/projected/21b1512e-af50-4bdd-8619-5bff9a4ce995-kube-api-access-v4ww4\") pod \"kube-storage-version-migrator-operator-b67b599dd-smmkq\" (UID: \"21b1512e-af50-4bdd-8619-5bff9a4ce995\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-smmkq" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217195 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fa7631e-0a16-43b0-8ac3-dc06b6e9cbb4-config\") pod \"service-ca-operator-777779d784-2hgg7\" (UID: \"8fa7631e-0a16-43b0-8ac3-dc06b6e9cbb4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2hgg7" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217214 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc8wr\" (UniqueName: \"kubernetes.io/projected/c778877d-77ce-493e-a787-d0b76ff13a77-kube-api-access-qc8wr\") pod \"packageserver-d55dfcdfc-rpng5\" (UID: \"c778877d-77ce-493e-a787-d0b76ff13a77\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217244 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/51446a4f-e443-47ee-9ca8-a67fdaf62a7e-signing-key\") pod \"service-ca-9c57cc56f-frmph\" (UID: \"51446a4f-e443-47ee-9ca8-a67fdaf62a7e\") " pod="openshift-service-ca/service-ca-9c57cc56f-frmph" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217280 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c508e45a-d6fc-419c-960b-7603bf3209b2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ktdt7\" (UID: \"c508e45a-d6fc-419c-960b-7603bf3209b2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktdt7" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217306 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/a7c3d5cf-ce3d-4b64-b685-fe70bcd252a0-config-volume\") pod \"dns-default-q4j8x\" (UID: \"a7c3d5cf-ce3d-4b64-b685-fe70bcd252a0\") " pod="openshift-dns/dns-default-q4j8x" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217358 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdq45\" (UniqueName: \"kubernetes.io/projected/8df9c601-f464-4501-8418-d4abbbe22f6b-kube-api-access-hdq45\") pod \"machine-config-controller-84d6567774-p5qm5\" (UID: \"8df9c601-f464-4501-8418-d4abbbe22f6b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p5qm5" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217384 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0421ca21-bf4e-4c89-9a3d-18a7603c1084-plugins-dir\") pod \"csi-hostpathplugin-jcj5k\" (UID: \"0421ca21-bf4e-4c89-9a3d-18a7603c1084\") " pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217400 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0421ca21-bf4e-4c89-9a3d-18a7603c1084-csi-data-dir\") pod \"csi-hostpathplugin-jcj5k\" (UID: \"0421ca21-bf4e-4c89-9a3d-18a7603c1084\") " pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217415 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x8wt\" (UniqueName: \"kubernetes.io/projected/bbc58c2b-401d-4f85-b550-5bdaad4f7c8c-kube-api-access-6x8wt\") pod \"catalog-operator-68c6474976-xt5cf\" (UID: \"bbc58c2b-401d-4f85-b550-5bdaad4f7c8c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xt5cf" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217430 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fcb02167-7185-4960-a665-fca3f7d2c220-etcd-client\") pod \"etcd-operator-b45778765-prtqz\" (UID: \"fcb02167-7185-4960-a665-fca3f7d2c220\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217455 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/61352dfb-6006-4c3f-b404-b32f8a54c08d-bound-sa-token\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217471 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcb02167-7185-4960-a665-fca3f7d2c220-config\") pod \"etcd-operator-b45778765-prtqz\" (UID: \"fcb02167-7185-4960-a665-fca3f7d2c220\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217486 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/61352dfb-6006-4c3f-b404-b32f8a54c08d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc 
kubenswrapper[4869]: I0127 09:56:14.217504 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2ecc898c-2377-4e6f-a02e-028eeca5eec8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-j9w4z\" (UID: \"2ecc898c-2377-4e6f-a02e-028eeca5eec8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j9w4z" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217529 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0421ca21-bf4e-4c89-9a3d-18a7603c1084-socket-dir\") pod \"csi-hostpathplugin-jcj5k\" (UID: \"0421ca21-bf4e-4c89-9a3d-18a7603c1084\") " pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217545 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b1512e-af50-4bdd-8619-5bff9a4ce995-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-smmkq\" (UID: \"21b1512e-af50-4bdd-8619-5bff9a4ce995\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-smmkq" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217561 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6sdb\" (UniqueName: \"kubernetes.io/projected/fcb02167-7185-4960-a665-fca3f7d2c220-kube-api-access-f6sdb\") pod \"etcd-operator-b45778765-prtqz\" (UID: \"fcb02167-7185-4960-a665-fca3f7d2c220\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217576 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t7k7\" (UniqueName: \"kubernetes.io/projected/51446a4f-e443-47ee-9ca8-a67fdaf62a7e-kube-api-access-8t7k7\") pod \"service-ca-9c57cc56f-frmph\" (UID: \"51446a4f-e443-47ee-9ca8-a67fdaf62a7e\") " pod="openshift-service-ca/service-ca-9c57cc56f-frmph" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217594 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bbc58c2b-401d-4f85-b550-5bdaad4f7c8c-srv-cert\") pod \"catalog-operator-68c6474976-xt5cf\" (UID: \"bbc58c2b-401d-4f85-b550-5bdaad4f7c8c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xt5cf" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217629 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17abb21b-00f0-41dd-80a3-5d4cb9acc1e6-service-ca-bundle\") pod \"router-default-5444994796-ffwjx\" (UID: \"17abb21b-00f0-41dd-80a3-5d4cb9acc1e6\") " pod="openshift-ingress/router-default-5444994796-ffwjx" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217645 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6b4b05c-6a93-4e36-810e-bb0da0a20d55-config\") pod \"authentication-operator-69f744f599-xkkb6\" (UID: \"c6b4b05c-6a93-4e36-810e-bb0da0a20d55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xkkb6" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217681 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qn5gr\" (UniqueName: \"kubernetes.io/projected/17abb21b-00f0-41dd-80a3-5d4cb9acc1e6-kube-api-access-qn5gr\") pod \"router-default-5444994796-ffwjx\" (UID: \"17abb21b-00f0-41dd-80a3-5d4cb9acc1e6\") " pod="openshift-ingress/router-default-5444994796-ffwjx" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217698 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6b4b05c-6a93-4e36-810e-bb0da0a20d55-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xkkb6\" (UID: \"c6b4b05c-6a93-4e36-810e-bb0da0a20d55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xkkb6" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217714 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/61352dfb-6006-4c3f-b404-b32f8a54c08d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217730 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpwjz\" (UniqueName: \"kubernetes.io/projected/c6b4b05c-6a93-4e36-810e-bb0da0a20d55-kube-api-access-hpwjz\") pod \"authentication-operator-69f744f599-xkkb6\" (UID: \"c6b4b05c-6a93-4e36-810e-bb0da0a20d55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xkkb6" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217766 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw65n\" (UniqueName: \"kubernetes.io/projected/2ecc898c-2377-4e6f-a02e-028eeca5eec8-kube-api-access-lw65n\") pod \"control-plane-machine-set-operator-78cbb6b69f-j9w4z\" (UID: \"2ecc898c-2377-4e6f-a02e-028eeca5eec8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j9w4z" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217784 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tks29\" (UniqueName: \"kubernetes.io/projected/ebc0fbc2-11a3-48a6-9442-81ffacb1516a-kube-api-access-tks29\") pod \"machine-config-operator-74547568cd-bn4v9\" (UID: \"ebc0fbc2-11a3-48a6-9442-81ffacb1516a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn4v9" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217800 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bbc58c2b-401d-4f85-b550-5bdaad4f7c8c-profile-collector-cert\") pod \"catalog-operator-68c6474976-xt5cf\" (UID: \"bbc58c2b-401d-4f85-b550-5bdaad4f7c8c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xt5cf" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.217822 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fcb02167-7185-4960-a665-fca3f7d2c220-etcd-ca\") pod \"etcd-operator-b45778765-prtqz\" (UID: \"fcb02167-7185-4960-a665-fca3f7d2c220\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.218014 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c778877d-77ce-493e-a787-d0b76ff13a77-webhook-cert\") pod \"packageserver-d55dfcdfc-rpng5\" (UID: \"c778877d-77ce-493e-a787-d0b76ff13a77\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.218038 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea00b67e-1ce1-40ce-be90-3f491e3c4ea9-cert\") pod \"ingress-canary-849z4\" (UID: \"ea00b67e-1ce1-40ce-be90-3f491e3c4ea9\") " pod="openshift-ingress-canary/ingress-canary-849z4" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.218062 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/61352dfb-6006-4c3f-b404-b32f8a54c08d-registry-certificates\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.218079 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsdbj\" (UniqueName: \"kubernetes.io/projected/a574e648-77e2-46a1-a2ad-af18e6e9ad57-kube-api-access-lsdbj\") pod \"marketplace-operator-79b997595-6kntj\" (UID: \"a574e648-77e2-46a1-a2ad-af18e6e9ad57\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.218094 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/17abb21b-00f0-41dd-80a3-5d4cb9acc1e6-default-certificate\") pod \"router-default-5444994796-ffwjx\" (UID: \"17abb21b-00f0-41dd-80a3-5d4cb9acc1e6\") " pod="openshift-ingress/router-default-5444994796-ffwjx" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.230378 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8df9c601-f464-4501-8418-d4abbbe22f6b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-p5qm5\" (UID: \"8df9c601-f464-4501-8418-d4abbbe22f6b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p5qm5" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.234759 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/61352dfb-6006-4c3f-b404-b32f8a54c08d-registry-certificates\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.234989 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c778877d-77ce-493e-a787-d0b76ff13a77-tmpfs\") pod \"packageserver-d55dfcdfc-rpng5\" (UID: \"c778877d-77ce-493e-a787-d0b76ff13a77\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.235090 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/61352dfb-6006-4c3f-b404-b32f8a54c08d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: 
\"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.236558 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fa7631e-0a16-43b0-8ac3-dc06b6e9cbb4-config\") pod \"service-ca-operator-777779d784-2hgg7\" (UID: \"8fa7631e-0a16-43b0-8ac3-dc06b6e9cbb4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2hgg7" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.237796 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/51446a4f-e443-47ee-9ca8-a67fdaf62a7e-signing-cabundle\") pod \"service-ca-9c57cc56f-frmph\" (UID: \"51446a4f-e443-47ee-9ca8-a67fdaf62a7e\") " pod="openshift-service-ca/service-ca-9c57cc56f-frmph" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.239407 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a574e648-77e2-46a1-a2ad-af18e6e9ad57-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6kntj\" (UID: \"a574e648-77e2-46a1-a2ad-af18e6e9ad57\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.239750 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c508e45a-d6fc-419c-960b-7603bf3209b2-config\") pod \"kube-controller-manager-operator-78b949d7b-ktdt7\" (UID: \"c508e45a-d6fc-419c-960b-7603bf3209b2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktdt7" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.240263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcb02167-7185-4960-a665-fca3f7d2c220-config\") pod \"etcd-operator-b45778765-prtqz\" (UID: \"fcb02167-7185-4960-a665-fca3f7d2c220\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.241351 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17abb21b-00f0-41dd-80a3-5d4cb9acc1e6-service-ca-bundle\") pod \"router-default-5444994796-ffwjx\" (UID: \"17abb21b-00f0-41dd-80a3-5d4cb9acc1e6\") " pod="openshift-ingress/router-default-5444994796-ffwjx" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.241581 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b1512e-af50-4bdd-8619-5bff9a4ce995-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-smmkq\" (UID: \"21b1512e-af50-4bdd-8619-5bff9a4ce995\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-smmkq" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.243780 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fcb02167-7185-4960-a665-fca3f7d2c220-etcd-service-ca\") pod \"etcd-operator-b45778765-prtqz\" (UID: \"fcb02167-7185-4960-a665-fca3f7d2c220\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.250035 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebc0fbc2-11a3-48a6-9442-81ffacb1516a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bn4v9\" (UID: \"ebc0fbc2-11a3-48a6-9442-81ffacb1516a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn4v9" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.250706 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ebc0fbc2-11a3-48a6-9442-81ffacb1516a-images\") pod \"machine-config-operator-74547568cd-bn4v9\" (UID: \"ebc0fbc2-11a3-48a6-9442-81ffacb1516a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn4v9" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.250978 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fcb02167-7185-4960-a665-fca3f7d2c220-etcd-ca\") pod \"etcd-operator-b45778765-prtqz\" (UID: \"fcb02167-7185-4960-a665-fca3f7d2c220\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.252503 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fcb02167-7185-4960-a665-fca3f7d2c220-etcd-client\") pod \"etcd-operator-b45778765-prtqz\" (UID: \"fcb02167-7185-4960-a665-fca3f7d2c220\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.255985 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-dpsrp"] Jan 27 09:56:14 crc kubenswrapper[4869]: E0127 09:56:14.256339 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:14.756312787 +0000 UTC m=+143.376736880 (durationBeforeRetry 500ms). 
Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.260112 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61352dfb-6006-4c3f-b404-b32f8a54c08d-trusted-ca\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp"
Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.261755 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8df9c601-f464-4501-8418-d4abbbe22f6b-proxy-tls\") pod \"machine-config-controller-84d6567774-p5qm5\" (UID: \"8df9c601-f464-4501-8418-d4abbbe22f6b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p5qm5"
Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.266726 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c778877d-77ce-493e-a787-d0b76ff13a77-webhook-cert\") pod \"packageserver-d55dfcdfc-rpng5\" (UID: \"c778877d-77ce-493e-a787-d0b76ff13a77\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5"
Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.286890 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcb02167-7185-4960-a665-fca3f7d2c220-serving-cert\") pod \"etcd-operator-b45778765-prtqz\" (UID: \"fcb02167-7185-4960-a665-fca3f7d2c220\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz"
Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.287235 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ebc0fbc2-11a3-48a6-9442-81ffacb1516a-proxy-tls\") pod \"machine-config-operator-74547568cd-bn4v9\" (UID: \"ebc0fbc2-11a3-48a6-9442-81ffacb1516a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn4v9"
Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.288450 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2ecc898c-2377-4e6f-a02e-028eeca5eec8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-j9w4z\" (UID: \"2ecc898c-2377-4e6f-a02e-028eeca5eec8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j9w4z"
Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.288506 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8fa7631e-0a16-43b0-8ac3-dc06b6e9cbb4-serving-cert\") pod \"service-ca-operator-777779d784-2hgg7\" (UID: \"8fa7631e-0a16-43b0-8ac3-dc06b6e9cbb4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2hgg7"
Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.288513 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName:
\"kubernetes.io/secret/17abb21b-00f0-41dd-80a3-5d4cb9acc1e6-stats-auth\") pod \"router-default-5444994796-ffwjx\" (UID: \"17abb21b-00f0-41dd-80a3-5d4cb9acc1e6\") " pod="openshift-ingress/router-default-5444994796-ffwjx" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.288952 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/17abb21b-00f0-41dd-80a3-5d4cb9acc1e6-default-certificate\") pod \"router-default-5444994796-ffwjx\" (UID: \"17abb21b-00f0-41dd-80a3-5d4cb9acc1e6\") " pod="openshift-ingress/router-default-5444994796-ffwjx" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.289032 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c778877d-77ce-493e-a787-d0b76ff13a77-apiservice-cert\") pod \"packageserver-d55dfcdfc-rpng5\" (UID: \"c778877d-77ce-493e-a787-d0b76ff13a77\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.289049 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/51446a4f-e443-47ee-9ca8-a67fdaf62a7e-signing-key\") pod \"service-ca-9c57cc56f-frmph\" (UID: \"51446a4f-e443-47ee-9ca8-a67fdaf62a7e\") " pod="openshift-service-ca/service-ca-9c57cc56f-frmph" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.289461 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/17abb21b-00f0-41dd-80a3-5d4cb9acc1e6-metrics-certs\") pod \"router-default-5444994796-ffwjx\" (UID: \"17abb21b-00f0-41dd-80a3-5d4cb9acc1e6\") " pod="openshift-ingress/router-default-5444994796-ffwjx" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.290675 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn5gr\" (UniqueName: \"kubernetes.io/projected/17abb21b-00f0-41dd-80a3-5d4cb9acc1e6-kube-api-access-qn5gr\") pod \"router-default-5444994796-ffwjx\" (UID: \"17abb21b-00f0-41dd-80a3-5d4cb9acc1e6\") " pod="openshift-ingress/router-default-5444994796-ffwjx" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.292754 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/61352dfb-6006-4c3f-b404-b32f8a54c08d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.293205 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/61352dfb-6006-4c3f-b404-b32f8a54c08d-registry-tls\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.295251 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9rzwn"] Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.303635 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/21b1512e-af50-4bdd-8619-5bff9a4ce995-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-smmkq\" (UID: 
\"21b1512e-af50-4bdd-8619-5bff9a4ce995\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-smmkq" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.304484 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a574e648-77e2-46a1-a2ad-af18e6e9ad57-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6kntj\" (UID: \"a574e648-77e2-46a1-a2ad-af18e6e9ad57\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.307824 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gk8wd"] Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.312949 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c508e45a-d6fc-419c-960b-7603bf3209b2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ktdt7\" (UID: \"c508e45a-d6fc-419c-960b-7603bf3209b2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktdt7" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.319970 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0421ca21-bf4e-4c89-9a3d-18a7603c1084-mountpoint-dir\") pod \"csi-hostpathplugin-jcj5k\" (UID: \"0421ca21-bf4e-4c89-9a3d-18a7603c1084\") " pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320020 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0421ca21-bf4e-4c89-9a3d-18a7603c1084-registration-dir\") pod \"csi-hostpathplugin-jcj5k\" (UID: \"0421ca21-bf4e-4c89-9a3d-18a7603c1084\") " pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320080 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6b4b05c-6a93-4e36-810e-bb0da0a20d55-serving-cert\") pod \"authentication-operator-69f744f599-xkkb6\" (UID: \"c6b4b05c-6a93-4e36-810e-bb0da0a20d55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xkkb6" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320139 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a7c3d5cf-ce3d-4b64-b685-fe70bcd252a0-metrics-tls\") pod \"dns-default-q4j8x\" (UID: \"a7c3d5cf-ce3d-4b64-b685-fe70bcd252a0\") " pod="openshift-dns/dns-default-q4j8x" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320170 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a2ec119-d8f3-4edb-aa2f-d4ffd3617458-config-volume\") pod \"collect-profiles-29491785-vk98z\" (UID: \"3a2ec119-d8f3-4edb-aa2f-d4ffd3617458\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320197 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6b4b05c-6a93-4e36-810e-bb0da0a20d55-service-ca-bundle\") pod \"authentication-operator-69f744f599-xkkb6\" (UID: 
\"c6b4b05c-6a93-4e36-810e-bb0da0a20d55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xkkb6" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320239 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jnqm\" (UniqueName: \"kubernetes.io/projected/ea00b67e-1ce1-40ce-be90-3f491e3c4ea9-kube-api-access-2jnqm\") pod \"ingress-canary-849z4\" (UID: \"ea00b67e-1ce1-40ce-be90-3f491e3c4ea9\") " pod="openshift-ingress-canary/ingress-canary-849z4" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320265 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8kwf\" (UniqueName: \"kubernetes.io/projected/0421ca21-bf4e-4c89-9a3d-18a7603c1084-kube-api-access-z8kwf\") pod \"csi-hostpathplugin-jcj5k\" (UID: \"0421ca21-bf4e-4c89-9a3d-18a7603c1084\") " pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320302 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320332 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q77tf\" (UniqueName: \"kubernetes.io/projected/3a2ec119-d8f3-4edb-aa2f-d4ffd3617458-kube-api-access-q77tf\") pod \"collect-profiles-29491785-vk98z\" (UID: \"3a2ec119-d8f3-4edb-aa2f-d4ffd3617458\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320360 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a2ec119-d8f3-4edb-aa2f-d4ffd3617458-secret-volume\") pod \"collect-profiles-29491785-vk98z\" (UID: \"3a2ec119-d8f3-4edb-aa2f-d4ffd3617458\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320453 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7c3d5cf-ce3d-4b64-b685-fe70bcd252a0-config-volume\") pod \"dns-default-q4j8x\" (UID: \"a7c3d5cf-ce3d-4b64-b685-fe70bcd252a0\") " pod="openshift-dns/dns-default-q4j8x" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320493 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0421ca21-bf4e-4c89-9a3d-18a7603c1084-plugins-dir\") pod \"csi-hostpathplugin-jcj5k\" (UID: \"0421ca21-bf4e-4c89-9a3d-18a7603c1084\") " pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320520 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0421ca21-bf4e-4c89-9a3d-18a7603c1084-csi-data-dir\") pod \"csi-hostpathplugin-jcj5k\" (UID: \"0421ca21-bf4e-4c89-9a3d-18a7603c1084\") " pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320549 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x8wt\" 
(UniqueName: \"kubernetes.io/projected/bbc58c2b-401d-4f85-b550-5bdaad4f7c8c-kube-api-access-6x8wt\") pod \"catalog-operator-68c6474976-xt5cf\" (UID: \"bbc58c2b-401d-4f85-b550-5bdaad4f7c8c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xt5cf" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320593 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0421ca21-bf4e-4c89-9a3d-18a7603c1084-socket-dir\") pod \"csi-hostpathplugin-jcj5k\" (UID: \"0421ca21-bf4e-4c89-9a3d-18a7603c1084\") " pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320684 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bbc58c2b-401d-4f85-b550-5bdaad4f7c8c-srv-cert\") pod \"catalog-operator-68c6474976-xt5cf\" (UID: \"bbc58c2b-401d-4f85-b550-5bdaad4f7c8c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xt5cf" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320715 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6b4b05c-6a93-4e36-810e-bb0da0a20d55-config\") pod \"authentication-operator-69f744f599-xkkb6\" (UID: \"c6b4b05c-6a93-4e36-810e-bb0da0a20d55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xkkb6" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320751 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6b4b05c-6a93-4e36-810e-bb0da0a20d55-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xkkb6\" (UID: \"c6b4b05c-6a93-4e36-810e-bb0da0a20d55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xkkb6" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320781 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpwjz\" (UniqueName: \"kubernetes.io/projected/c6b4b05c-6a93-4e36-810e-bb0da0a20d55-kube-api-access-hpwjz\") pod \"authentication-operator-69f744f599-xkkb6\" (UID: \"c6b4b05c-6a93-4e36-810e-bb0da0a20d55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xkkb6" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320848 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bbc58c2b-401d-4f85-b550-5bdaad4f7c8c-profile-collector-cert\") pod \"catalog-operator-68c6474976-xt5cf\" (UID: \"bbc58c2b-401d-4f85-b550-5bdaad4f7c8c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xt5cf" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320873 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea00b67e-1ce1-40ce-be90-3f491e3c4ea9-cert\") pod \"ingress-canary-849z4\" (UID: \"ea00b67e-1ce1-40ce-be90-3f491e3c4ea9\") " pod="openshift-ingress-canary/ingress-canary-849z4" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.320912 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlggf\" (UniqueName: \"kubernetes.io/projected/a7c3d5cf-ce3d-4b64-b685-fe70bcd252a0-kube-api-access-xlggf\") pod \"dns-default-q4j8x\" (UID: \"a7c3d5cf-ce3d-4b64-b685-fe70bcd252a0\") " 
pod="openshift-dns/dns-default-q4j8x" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.321675 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0421ca21-bf4e-4c89-9a3d-18a7603c1084-mountpoint-dir\") pod \"csi-hostpathplugin-jcj5k\" (UID: \"0421ca21-bf4e-4c89-9a3d-18a7603c1084\") " pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.321912 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0421ca21-bf4e-4c89-9a3d-18a7603c1084-registration-dir\") pod \"csi-hostpathplugin-jcj5k\" (UID: \"0421ca21-bf4e-4c89-9a3d-18a7603c1084\") " pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.322285 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6b4b05c-6a93-4e36-810e-bb0da0a20d55-service-ca-bundle\") pod \"authentication-operator-69f744f599-xkkb6\" (UID: \"c6b4b05c-6a93-4e36-810e-bb0da0a20d55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xkkb6" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.322754 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a2ec119-d8f3-4edb-aa2f-d4ffd3617458-config-volume\") pod \"collect-profiles-29491785-vk98z\" (UID: \"3a2ec119-d8f3-4edb-aa2f-d4ffd3617458\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.322888 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7c3d5cf-ce3d-4b64-b685-fe70bcd252a0-config-volume\") pod \"dns-default-q4j8x\" (UID: \"a7c3d5cf-ce3d-4b64-b685-fe70bcd252a0\") " pod="openshift-dns/dns-default-q4j8x" Jan 27 09:56:14 crc kubenswrapper[4869]: E0127 09:56:14.323015 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:14.822999447 +0000 UTC m=+143.443423530 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.323320 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6b4b05c-6a93-4e36-810e-bb0da0a20d55-config\") pod \"authentication-operator-69f744f599-xkkb6\" (UID: \"c6b4b05c-6a93-4e36-810e-bb0da0a20d55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xkkb6" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.323412 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0421ca21-bf4e-4c89-9a3d-18a7603c1084-plugins-dir\") pod \"csi-hostpathplugin-jcj5k\" (UID: \"0421ca21-bf4e-4c89-9a3d-18a7603c1084\") " pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.323485 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0421ca21-bf4e-4c89-9a3d-18a7603c1084-socket-dir\") pod \"csi-hostpathplugin-jcj5k\" (UID: \"0421ca21-bf4e-4c89-9a3d-18a7603c1084\") " pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.323487 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0421ca21-bf4e-4c89-9a3d-18a7603c1084-csi-data-dir\") pod \"csi-hostpathplugin-jcj5k\" (UID: \"0421ca21-bf4e-4c89-9a3d-18a7603c1084\") " pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.333902 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6b4b05c-6a93-4e36-810e-bb0da0a20d55-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xkkb6\" (UID: \"c6b4b05c-6a93-4e36-810e-bb0da0a20d55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xkkb6" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.337288 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thwpk"] Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.337333 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rnv4g"] Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.339585 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4ww4\" (UniqueName: \"kubernetes.io/projected/21b1512e-af50-4bdd-8619-5bff9a4ce995-kube-api-access-v4ww4\") pod \"kube-storage-version-migrator-operator-b67b599dd-smmkq\" (UID: \"21b1512e-af50-4bdd-8619-5bff9a4ce995\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-smmkq" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.341102 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsdbj\" (UniqueName: 
\"kubernetes.io/projected/a574e648-77e2-46a1-a2ad-af18e6e9ad57-kube-api-access-lsdbj\") pod \"marketplace-operator-79b997595-6kntj\" (UID: \"a574e648-77e2-46a1-a2ad-af18e6e9ad57\") " pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.341653 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bbc58c2b-401d-4f85-b550-5bdaad4f7c8c-profile-collector-cert\") pod \"catalog-operator-68c6474976-xt5cf\" (UID: \"bbc58c2b-401d-4f85-b550-5bdaad4f7c8c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xt5cf" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.341686 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc8wr\" (UniqueName: \"kubernetes.io/projected/c778877d-77ce-493e-a787-d0b76ff13a77-kube-api-access-qc8wr\") pod \"packageserver-d55dfcdfc-rpng5\" (UID: \"c778877d-77ce-493e-a787-d0b76ff13a77\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.343807 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a7c3d5cf-ce3d-4b64-b685-fe70bcd252a0-metrics-tls\") pod \"dns-default-q4j8x\" (UID: \"a7c3d5cf-ce3d-4b64-b685-fe70bcd252a0\") " pod="openshift-dns/dns-default-q4j8x" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.346459 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea00b67e-1ce1-40ce-be90-3f491e3c4ea9-cert\") pod \"ingress-canary-849z4\" (UID: \"ea00b67e-1ce1-40ce-be90-3f491e3c4ea9\") " pod="openshift-ingress-canary/ingress-canary-849z4" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.351512 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6b4b05c-6a93-4e36-810e-bb0da0a20d55-serving-cert\") pod \"authentication-operator-69f744f599-xkkb6\" (UID: \"c6b4b05c-6a93-4e36-810e-bb0da0a20d55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xkkb6" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.351855 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxw6v\" (UniqueName: \"kubernetes.io/projected/61352dfb-6006-4c3f-b404-b32f8a54c08d-kube-api-access-dxw6v\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.352219 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a2ec119-d8f3-4edb-aa2f-d4ffd3617458-secret-volume\") pod \"collect-profiles-29491785-vk98z\" (UID: \"3a2ec119-d8f3-4edb-aa2f-d4ffd3617458\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.374845 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c508e45a-d6fc-419c-960b-7603bf3209b2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ktdt7\" (UID: \"c508e45a-d6fc-419c-960b-7603bf3209b2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktdt7" Jan 27 09:56:14 crc 
kubenswrapper[4869]: I0127 09:56:14.374984 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdq45\" (UniqueName: \"kubernetes.io/projected/8df9c601-f464-4501-8418-d4abbbe22f6b-kube-api-access-hdq45\") pod \"machine-config-controller-84d6567774-p5qm5\" (UID: \"8df9c601-f464-4501-8418-d4abbbe22f6b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p5qm5" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.375352 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bbc58c2b-401d-4f85-b550-5bdaad4f7c8c-srv-cert\") pod \"catalog-operator-68c6474976-xt5cf\" (UID: \"bbc58c2b-401d-4f85-b550-5bdaad4f7c8c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xt5cf" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.400638 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/61352dfb-6006-4c3f-b404-b32f8a54c08d-bound-sa-token\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.407216 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw65n\" (UniqueName: \"kubernetes.io/projected/2ecc898c-2377-4e6f-a02e-028eeca5eec8-kube-api-access-lw65n\") pod \"control-plane-machine-set-operator-78cbb6b69f-j9w4z\" (UID: \"2ecc898c-2377-4e6f-a02e-028eeca5eec8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j9w4z" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.425375 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:14 crc kubenswrapper[4869]: E0127 09:56:14.425772 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:14.925757542 +0000 UTC m=+143.546181625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.435011 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-ffwjx" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.436161 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6sdb\" (UniqueName: \"kubernetes.io/projected/fcb02167-7185-4960-a665-fca3f7d2c220-kube-api-access-f6sdb\") pod \"etcd-operator-b45778765-prtqz\" (UID: \"fcb02167-7185-4960-a665-fca3f7d2c220\") " pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.443114 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-smmkq" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.446862 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t7k7\" (UniqueName: \"kubernetes.io/projected/51446a4f-e443-47ee-9ca8-a67fdaf62a7e-kube-api-access-8t7k7\") pod \"service-ca-9c57cc56f-frmph\" (UID: \"51446a4f-e443-47ee-9ca8-a67fdaf62a7e\") " pod="openshift-service-ca/service-ca-9c57cc56f-frmph" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.449928 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p5qm5" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.468401 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tks29\" (UniqueName: \"kubernetes.io/projected/ebc0fbc2-11a3-48a6-9442-81ffacb1516a-kube-api-access-tks29\") pod \"machine-config-operator-74547568cd-bn4v9\" (UID: \"ebc0fbc2-11a3-48a6-9442-81ffacb1516a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn4v9" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.475333 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktdt7" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.486290 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j9w4z" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.520338 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-frmph" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.529970 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 09:51:13 +0000 UTC, rotation deadline is 2026-11-24 03:21:58.8237875 +0000 UTC Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.530013 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7217h25m44.293776573s for next certificate rotation Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.530206 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.530549 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: E0127 09:56:14.530913 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:15.030901049 +0000 UTC m=+143.651325132 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.538404 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbpfb\" (UniqueName: \"kubernetes.io/projected/8fa7631e-0a16-43b0-8ac3-dc06b6e9cbb4-kube-api-access-bbpfb\") pod \"service-ca-operator-777779d784-2hgg7\" (UID: \"8fa7631e-0a16-43b0-8ac3-dc06b6e9cbb4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2hgg7" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.548328 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.554286 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jnqm\" (UniqueName: \"kubernetes.io/projected/ea00b67e-1ce1-40ce-be90-3f491e3c4ea9-kube-api-access-2jnqm\") pod \"ingress-canary-849z4\" (UID: \"ea00b67e-1ce1-40ce-be90-3f491e3c4ea9\") " pod="openshift-ingress-canary/ingress-canary-849z4" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.564018 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.579767 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q77tf\" (UniqueName: \"kubernetes.io/projected/3a2ec119-d8f3-4edb-aa2f-d4ffd3617458-kube-api-access-q77tf\") pod \"collect-profiles-29491785-vk98z\" (UID: \"3a2ec119-d8f3-4edb-aa2f-d4ffd3617458\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.580105 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.588748 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8kwf\" (UniqueName: \"kubernetes.io/projected/0421ca21-bf4e-4c89-9a3d-18a7603c1084-kube-api-access-z8kwf\") pod \"csi-hostpathplugin-jcj5k\" (UID: \"0421ca21-bf4e-4c89-9a3d-18a7603c1084\") " pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.599473 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x8wt\" (UniqueName: \"kubernetes.io/projected/bbc58c2b-401d-4f85-b550-5bdaad4f7c8c-kube-api-access-6x8wt\") pod \"catalog-operator-68c6474976-xt5cf\" (UID: \"bbc58c2b-401d-4f85-b550-5bdaad4f7c8c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xt5cf" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.610609 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlggf\" (UniqueName: \"kubernetes.io/projected/a7c3d5cf-ce3d-4b64-b685-fe70bcd252a0-kube-api-access-xlggf\") pod \"dns-default-q4j8x\" (UID: \"a7c3d5cf-ce3d-4b64-b685-fe70bcd252a0\") " pod="openshift-dns/dns-default-q4j8x" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.637339 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:14 crc kubenswrapper[4869]: E0127 09:56:14.637640 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:15.137621701 +0000 UTC m=+143.758045784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.646071 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpwjz\" (UniqueName: \"kubernetes.io/projected/c6b4b05c-6a93-4e36-810e-bb0da0a20d55-kube-api-access-hpwjz\") pod \"authentication-operator-69f744f599-xkkb6\" (UID: \"c6b4b05c-6a93-4e36-810e-bb0da0a20d55\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xkkb6" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.648036 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.657009 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-849z4" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.740227 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:14 crc kubenswrapper[4869]: E0127 09:56:14.740501 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:15.240488841 +0000 UTC m=+143.860912924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.766548 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn4v9" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.813439 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2hgg7" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.845119 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:14 crc kubenswrapper[4869]: E0127 09:56:14.845439 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:15.345419948 +0000 UTC m=+143.965844031 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.849249 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thwpk" event={"ID":"121d9a3b-d369-4245-84ec-3efeb902ccd8","Type":"ContainerStarted","Data":"d369c4d3922ad6b9da6c2e59bbc9ba12239114e26c0a9954656c757b5203a090"} Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.876586 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" event={"ID":"17cbc9af-17b4-4815-b527-9d9d9c5112fc","Type":"ContainerStarted","Data":"1091c6e1be645b0352ee4de7554eb1fb9b1396b9c0c4ff8ff83edb7a7be5d3dc"} Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.876628 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" event={"ID":"17cbc9af-17b4-4815-b527-9d9d9c5112fc","Type":"ContainerStarted","Data":"2ca11e1802f9fdec39c36a521f2fea78195615800b88319d01d88cafb34f52a3"} Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.876818 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xt5cf" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.877237 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.888307 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xkkb6" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.888620 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dpsrp" event={"ID":"493c38dc-c859-4715-b97f-be1388ee2162","Type":"ContainerStarted","Data":"b93ea3aa2f1e9850631359ab0056c9f618d9b9b25172e394da8f8aa83e60b1e6"} Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.889933 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-clff8"] Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.895671 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-q4j8x" Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.909660 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qj9jg" event={"ID":"7a44818f-a388-4dcb-93f4-b781c1f7bf16","Type":"ContainerStarted","Data":"14f459d943e2cfdc42f2186b05f325b7713614ce257e1d58201c30d94cba6e36"} Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.915523 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-d2ml5" event={"ID":"1718eaa3-1d2b-46a0-b43d-e6408e75d53a","Type":"ContainerStarted","Data":"5ad8f64888b1fbc7408cc892c695c8c7c2e2c346321af969ebf4af7be153bb09"} Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.915566 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-d2ml5" event={"ID":"1718eaa3-1d2b-46a0-b43d-e6408e75d53a","Type":"ContainerStarted","Data":"1ffc9dcee070591fd1aa4cdef75353daf263d02fec1c6af02ca2799f3d1813eb"} Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.922254 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" event={"ID":"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3","Type":"ContainerStarted","Data":"f8a9906b6c89f27dde8b80c01f245ce7d6c3e476b0871aae0f66773cdc3a7c66"} Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.929482 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" event={"ID":"89c725c4-90e8-4965-b48d-89f3d2771faf","Type":"ContainerStarted","Data":"e47f442d5f20bda9753f2a78141a9a70c13da64a9f6d2e48504e5e50f66b26a5"} Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.930647 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rzwn" event={"ID":"6a00c17d-c0fe-49a3-921a-2c19dcea3274","Type":"ContainerStarted","Data":"0840cb1af3a658d400af5b7b203a6520f42b1cb1abf7f2ffa4a5d4a6722f7fc4"} Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.934705 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ffwjx" event={"ID":"17abb21b-00f0-41dd-80a3-5d4cb9acc1e6","Type":"ContainerStarted","Data":"3ef15afee6322e1418334594d8fc63a649b05f00b9eda636343923113892694d"} Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.937265 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qvjjk" event={"ID":"b10a171e-2958-45c1-9a6d-c8c14a7a24ae","Type":"ContainerStarted","Data":"47a531781fea37c870f94cddc81b536d61fc0e6d0cb38ddcd3f44b1194b6d6fe"} Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.938785 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gk8wd" event={"ID":"0347f639-0210-4f2c-99de-915830c86a6d","Type":"ContainerStarted","Data":"eeabcc4caff7341a131100b111bd4507e3afeced91179d7557b9ebccd446720d"} Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.946184 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 
27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.947368 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-q86c4" event={"ID":"b6851779-1393-4518-be8b-519296708bd7","Type":"ContainerStarted","Data":"0d63e854ffe4c4dfec2ef132ae243942f960bac69c5ca0f8541ecf16d9ea9a48"} Jan 27 09:56:14 crc kubenswrapper[4869]: E0127 09:56:14.947658 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:15.447643187 +0000 UTC m=+144.068067270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.960356 4869 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vwhlz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 27 09:56:14 crc kubenswrapper[4869]: I0127 09:56:14.960412 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" podUID="17cbc9af-17b4-4815-b527-9d9d9c5112fc" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.039531 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-w8hng"] Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.047602 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:15 crc kubenswrapper[4869]: E0127 09:56:15.048066 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:15.54804587 +0000 UTC m=+144.168469953 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.115036 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jw59"] Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.149365 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:15 crc kubenswrapper[4869]: E0127 09:56:15.151112 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:15.651085098 +0000 UTC m=+144.271509181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.250141 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:15 crc kubenswrapper[4869]: E0127 09:56:15.250727 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:15.750699654 +0000 UTC m=+144.371123737 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.250814 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:15 crc kubenswrapper[4869]: E0127 09:56:15.251178 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:15.751164147 +0000 UTC m=+144.371588230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.351533 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:15 crc kubenswrapper[4869]: E0127 09:56:15.351869 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:15.851854773 +0000 UTC m=+144.472278856 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.452632 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:15 crc kubenswrapper[4869]: E0127 09:56:15.453033 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:15.953018473 +0000 UTC m=+144.573442556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.553436 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:15 crc kubenswrapper[4869]: E0127 09:56:15.554017 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:16.054000972 +0000 UTC m=+144.674425055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.592348 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qf4jk"] Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.651379 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-d2ml5" podStartSLOduration=4.651364183 podStartE2EDuration="4.651364183s" podCreationTimestamp="2026-01-27 09:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:15.649323796 +0000 UTC m=+144.269747879" watchObservedRunningTime="2026-01-27 09:56:15.651364183 +0000 UTC m=+144.271788256" Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.661271 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:15 crc kubenswrapper[4869]: E0127 09:56:15.661585 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:16.161574705 +0000 UTC m=+144.781998788 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.670203 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dwt4c"] Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.698663 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.698719 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.705354 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-p5qm5"] Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.762197 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:15 crc kubenswrapper[4869]: E0127 09:56:15.762932 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:16.262902192 +0000 UTC m=+144.883326275 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.807314 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hrflq"] Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.839331 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-q86c4" podStartSLOduration=124.839311792 podStartE2EDuration="2m4.839311792s" podCreationTimestamp="2026-01-27 09:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:15.838290783 +0000 UTC m=+144.458714856" watchObservedRunningTime="2026-01-27 09:56:15.839311792 +0000 UTC m=+144.459735875" Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.864094 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:15 crc kubenswrapper[4869]: E0127 09:56:15.864537 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:16.364517473 +0000 UTC m=+144.984941556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.959545 4869 generic.go:334] "Generic (PLEG): container finished" podID="6a00c17d-c0fe-49a3-921a-2c19dcea3274" containerID="0badffbc782fe974a6acf0cabbef42e95223ce4e6fed9ba96181b30a34073f15" exitCode=0 Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.959934 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rzwn" event={"ID":"6a00c17d-c0fe-49a3-921a-2c19dcea3274","Type":"ContainerDied","Data":"0badffbc782fe974a6acf0cabbef42e95223ce4e6fed9ba96181b30a34073f15"} Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.964654 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:15 crc kubenswrapper[4869]: E0127 09:56:15.965156 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:16.465134616 +0000 UTC m=+145.085558699 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.966092 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" podStartSLOduration=124.96607278 podStartE2EDuration="2m4.96607278s" podCreationTimestamp="2026-01-27 09:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:15.924409762 +0000 UTC m=+144.544833835" watchObservedRunningTime="2026-01-27 09:56:15.96607278 +0000 UTC m=+144.586496863" Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.972088 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qj9jg" event={"ID":"7a44818f-a388-4dcb-93f4-b781c1f7bf16","Type":"ContainerStarted","Data":"2caa375e163ca606a7174e2fe61840b6d7976612fb107111450996096b0d99b7"} Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.972142 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qj9jg" event={"ID":"7a44818f-a388-4dcb-93f4-b781c1f7bf16","Type":"ContainerStarted","Data":"1f2f6e8410c23ed1b46a818d20929fb64da7f895d063bf94951544c865e8a4e2"} Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.992165 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bnbjg"] Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.995507 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dpsrp" event={"ID":"493c38dc-c859-4715-b97f-be1388ee2162","Type":"ContainerStarted","Data":"150d0968fe1e371a2d4bb819024cf97c83dd8494d89b5359e853e42bc5bedd0b"} Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.995814 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-dpsrp" Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.998619 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-dpsrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 27 09:56:15 crc kubenswrapper[4869]: I0127 09:56:15.998732 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dpsrp" podUID="493c38dc-c859-4715-b97f-be1388ee2162" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.037097 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gk8wd" event={"ID":"0347f639-0210-4f2c-99de-915830c86a6d","Type":"ContainerStarted","Data":"6b295910922433efd2d029378439b186daf1511fcf4e4b6a7ac0964b0f18b2fb"} Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.048194 4869 patch_prober.go:28] interesting 
pod/console-operator-58897d9998-gk8wd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.048239 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gk8wd" podUID="0347f639-0210-4f2c-99de-915830c86a6d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.066054 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:16 crc kubenswrapper[4869]: E0127 09:56:16.066499 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:16.566485773 +0000 UTC m=+145.186909856 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.076439 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v5prp" podStartSLOduration=125.076421983 podStartE2EDuration="2m5.076421983s" podCreationTimestamp="2026-01-27 09:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:16.039406364 +0000 UTC m=+144.659830437" watchObservedRunningTime="2026-01-27 09:56:16.076421983 +0000 UTC m=+144.696846066" Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.077475 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" podStartSLOduration=124.077469593 podStartE2EDuration="2m4.077469593s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:16.076169191 +0000 UTC m=+144.696593274" watchObservedRunningTime="2026-01-27 09:56:16.077469593 +0000 UTC m=+144.697893676" Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.102161 4869 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rnv4g container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" start-of-body= Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.102521 4869 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" podUID="a1cbbd0a-4425-4c44-a867-daaa6e90a6d3" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.102815 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-qvjjk" podStartSLOduration=124.102795549 podStartE2EDuration="2m4.102795549s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:16.102444443 +0000 UTC m=+144.722868526" watchObservedRunningTime="2026-01-27 09:56:16.102795549 +0000 UTC m=+144.723219632" Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.156622 4869 generic.go:334] "Generic (PLEG): container finished" podID="89c725c4-90e8-4965-b48d-89f3d2771faf" containerID="39fa9d0549d680c7b02b2da6a24a537949bb79025aefa3e70db9c4afb869e912" exitCode=0 Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.167056 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:16 crc kubenswrapper[4869]: E0127 09:56:16.167369 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:16.667342148 +0000 UTC m=+145.287766231 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.167574 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:16 crc kubenswrapper[4869]: E0127 09:56:16.168700 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:16.668692112 +0000 UTC m=+145.289116195 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.243787 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-ffwjx" podStartSLOduration=124.243769279 podStartE2EDuration="2m4.243769279s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:16.199657325 +0000 UTC m=+144.820081408" watchObservedRunningTime="2026-01-27 09:56:16.243769279 +0000 UTC m=+144.864193372" Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.268748 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.269154 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-gk8wd" Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.269197 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qf4jk" event={"ID":"55adfb11-256b-4dd4-ba09-00ffd68f6e5e","Type":"ContainerStarted","Data":"e876250b57a092acadd2784a0798a88e10df0f91548439fc98397304ecbd486d"} Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.269221 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.269255 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.269273 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-w8hng" event={"ID":"b05e9e31-f26d-4358-a644-796cd3fea7a8","Type":"ContainerStarted","Data":"b355e5aae6e145ca82095d3dcb83883d82738ee359107c41e2c0d6fe080e8ecc"} Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.269286 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" event={"ID":"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3","Type":"ContainerStarted","Data":"66f15107b2c646be50a91c143afe7c055cf67ae86a7fa002c5a7380957c4e54e"} Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.269298 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thwpk" event={"ID":"121d9a3b-d369-4245-84ec-3efeb902ccd8","Type":"ContainerStarted","Data":"660cf3b20868f55a0f336c68aea23024894863b61969bb8c2e5edb387a6d6495"} Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.269314 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jw59" event={"ID":"8b83a5e9-9fbe-4404-8dd2-abb2ec6f6e1c","Type":"ContainerStarted","Data":"6fb39799648d92eed04da46286bca016cbb4353701a1adc311b971a8739dd0b2"} Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.269331 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ffwjx" event={"ID":"17abb21b-00f0-41dd-80a3-5d4cb9acc1e6","Type":"ContainerStarted","Data":"e1c243566632e75abeea8ba50d16bff61f2dc1929170ce4a068c4981f32c21fd"} Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.269345 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-clff8" event={"ID":"670d8b6b-95a2-4711-98db-3f71e295093b","Type":"ContainerStarted","Data":"b6584201befc174f369f079a27176d54cce2b5f07c48c5c0892669eb587e326c"} Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.269357 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-clff8" event={"ID":"670d8b6b-95a2-4711-98db-3f71e295093b","Type":"ContainerStarted","Data":"07304debb5770c2acd5eb2e08e310253f2f1501c7540a9140c5baeafcad27d19"} Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.269368 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" event={"ID":"89c725c4-90e8-4965-b48d-89f3d2771faf","Type":"ContainerDied","Data":"39fa9d0549d680c7b02b2da6a24a537949bb79025aefa3e70db9c4afb869e912"} Jan 27 09:56:16 crc kubenswrapper[4869]: E0127 09:56:16.270180 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:16.770162666 +0000 UTC m=+145.390586759 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.278277 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-gk8wd" podStartSLOduration=125.278260799 podStartE2EDuration="2m5.278260799s" podCreationTimestamp="2026-01-27 09:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:16.276642252 +0000 UTC m=+144.897066345" watchObservedRunningTime="2026-01-27 09:56:16.278260799 +0000 UTC m=+144.898684882" Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.370546 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:16 crc kubenswrapper[4869]: E0127 09:56:16.371957 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:16.871946154 +0000 UTC m=+145.492370237 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.407892 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" podStartSLOduration=125.407876201 podStartE2EDuration="2m5.407876201s" podCreationTimestamp="2026-01-27 09:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:16.366753739 +0000 UTC m=+144.987177822" watchObservedRunningTime="2026-01-27 09:56:16.407876201 +0000 UTC m=+145.028300284" Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.408854 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-thwpk" podStartSLOduration=125.408845258 podStartE2EDuration="2m5.408845258s" podCreationTimestamp="2026-01-27 09:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:16.407056903 +0000 UTC m=+145.027481006" watchObservedRunningTime="2026-01-27 09:56:16.408845258 +0000 UTC m=+145.029269341" Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.435785 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-ffwjx" Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.439531 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-dpsrp" podStartSLOduration=125.439508766 podStartE2EDuration="2m5.439508766s" podCreationTimestamp="2026-01-27 09:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:16.435729268 +0000 UTC m=+145.056153351" watchObservedRunningTime="2026-01-27 09:56:16.439508766 +0000 UTC m=+145.059932859" Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.461938 4869 patch_prober.go:28] interesting pod/router-default-5444994796-ffwjx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 09:56:16 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 27 09:56:16 crc kubenswrapper[4869]: [+]process-running ok Jan 27 09:56:16 crc kubenswrapper[4869]: healthz check failed Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.461985 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffwjx" podUID="17abb21b-00f0-41dd-80a3-5d4cb9acc1e6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.471676 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:16 crc kubenswrapper[4869]: E0127 09:56:16.471847 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:16.971820993 +0000 UTC m=+145.592245076 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.471985 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:16 crc kubenswrapper[4869]: E0127 09:56:16.472224 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:16.972215282 +0000 UTC m=+145.592639365 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.544643 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-qj9jg" podStartSLOduration=125.544624673 podStartE2EDuration="2m5.544624673s" podCreationTimestamp="2026-01-27 09:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:16.496420765 +0000 UTC m=+145.116844848" watchObservedRunningTime="2026-01-27 09:56:16.544624673 +0000 UTC m=+145.165048756" Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.572457 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:16 crc kubenswrapper[4869]: E0127 09:56:16.572799 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:17.072785133 +0000 UTC m=+145.693209216 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.607147 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6f8sx"] Jan 27 09:56:16 crc kubenswrapper[4869]: W0127 09:56:16.638511 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b8af0be_d73b_4b8e_b7a2_295834553924.slice/crio-9296e0fe25f8c66f68b952d25e2b3b4c62d915d1c26ec07de16c42e69bd40d97 WatchSource:0}: Error finding container 9296e0fe25f8c66f68b952d25e2b3b4c62d915d1c26ec07de16c42e69bd40d97: Status 404 returned error can't find the container with id 9296e0fe25f8c66f68b952d25e2b3b4c62d915d1c26ec07de16c42e69bd40d97 Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.649649 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-smmkq"] Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.659938 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-prtqz"] Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.674172 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:16 crc kubenswrapper[4869]: E0127 09:56:16.674452 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:17.174442525 +0000 UTC m=+145.794866608 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:16 crc kubenswrapper[4869]: W0127 09:56:16.688546 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcb02167_7185_4960_a665_fca3f7d2c220.slice/crio-51923df05c6edc303b2ec157708d5fbb4b52f841c4760a28b15a4a13d3ed3e1e WatchSource:0}: Error finding container 51923df05c6edc303b2ec157708d5fbb4b52f841c4760a28b15a4a13d3ed3e1e: Status 404 returned error can't find the container with id 51923df05c6edc303b2ec157708d5fbb4b52f841c4760a28b15a4a13d3ed3e1e Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.730720 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhjsx"] Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.774973 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:16 crc kubenswrapper[4869]: E0127 09:56:16.775563 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:17.275534201 +0000 UTC m=+145.895958284 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.876976 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:16 crc kubenswrapper[4869]: E0127 09:56:16.877557 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:17.37752645 +0000 UTC m=+145.997950533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.877777 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jcj5k"] Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.909161 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5"] Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.927999 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vjcv"] Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.930640 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktdt7"] Jan 27 09:56:16 crc kubenswrapper[4869]: W0127 09:56:16.933782 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc778877d_77ce_493e_a787_d0b76ff13a77.slice/crio-95dedc38c6e3c0d580b0d99db0967b116fb566f0d645c08a48ff6e4956ffd5f8 WatchSource:0}: Error finding container 95dedc38c6e3c0d580b0d99db0967b116fb566f0d645c08a48ff6e4956ffd5f8: Status 404 returned error can't find the container with id 95dedc38c6e3c0d580b0d99db0967b116fb566f0d645c08a48ff6e4956ffd5f8 Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.943674 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xt5cf"] Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.962111 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z"] Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.972382 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j9w4z"] Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.977351 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vxds9"] Jan 27 09:56:16 crc kubenswrapper[4869]: I0127 09:56:16.980323 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:16 crc kubenswrapper[4869]: E0127 09:56:16.980644 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:17.48063078 +0000 UTC m=+146.101054863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.083398 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:17 crc kubenswrapper[4869]: E0127 09:56:17.083701 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:17.583687449 +0000 UTC m=+146.204111532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.104163 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-frmph"] Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.108226 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6kntj"] Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.115343 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-q4j8x"] Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.119925 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2hgg7"] Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.120399 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-849z4"] Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.122446 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bn4v9"] Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.149886 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xkkb6"] Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.163862 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktdt7" event={"ID":"c508e45a-d6fc-419c-960b-7603bf3209b2","Type":"ContainerStarted","Data":"329bf6ab225e28a75b91be05f7c04831e3c48077523588137c2bd081d6d117e7"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.165726 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p5qm5" event={"ID":"8df9c601-f464-4501-8418-d4abbbe22f6b","Type":"ContainerStarted","Data":"c084f8827ed09ff3ddd9a755a75bc31f49ed10002d139911ac29868f20a903bf"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.165751 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p5qm5" event={"ID":"8df9c601-f464-4501-8418-d4abbbe22f6b","Type":"ContainerStarted","Data":"24e9ae8c6dde5ee7c01ca83e638b6daab951c17d18861f9b7c55c05897b41664"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.165762 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p5qm5" event={"ID":"8df9c601-f464-4501-8418-d4abbbe22f6b","Type":"ContainerStarted","Data":"ab5b35b5f65f54651776f09fc5d854d3892193dfca7a337e524cebfaa152538f"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.167582 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qf4jk" event={"ID":"55adfb11-256b-4dd4-ba09-00ffd68f6e5e","Type":"ContainerStarted","Data":"a5d613c5c2510edd942c57f7f0b7829976a6a180058b8e94f0eec6dd79a5b2d7"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.172184 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xt5cf" event={"ID":"bbc58c2b-401d-4f85-b550-5bdaad4f7c8c","Type":"ContainerStarted","Data":"98716192e9248cb396d662ec8cefc8c428e11e6e158979e3367fd271186842c2"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.182861 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p5qm5" podStartSLOduration=125.182845274 podStartE2EDuration="2m5.182845274s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:17.182309808 +0000 UTC m=+145.802733891" watchObservedRunningTime="2026-01-27 09:56:17.182845274 +0000 UTC m=+145.803269357" Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.184760 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:17 crc kubenswrapper[4869]: E0127 09:56:17.184946 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:17.684923782 +0000 UTC m=+146.305347865 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.185056 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:17 crc kubenswrapper[4869]: E0127 09:56:17.185441 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:17.685427376 +0000 UTC m=+146.305851469 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.196601 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bnbjg" event={"ID":"8e262a14-a507-44b4-8634-5f4854181f02","Type":"ContainerStarted","Data":"98195474c21f6ae677bb13f2d6dcd5f93dc5d5d1eddbe34a6f46ff31e00f90ac"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.196677 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bnbjg" event={"ID":"8e262a14-a507-44b4-8634-5f4854181f02","Type":"ContainerStarted","Data":"4456a70a1725b0709ad3951ae8599576de5de8d4349c049f80fcc1eb04987924"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.200875 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rzwn" event={"ID":"6a00c17d-c0fe-49a3-921a-2c19dcea3274","Type":"ContainerStarted","Data":"5312c79cebf648a44c96351e182693beef2a1780417df9e1cc9a5f3d9c663e17"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.201207 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rzwn" Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.209906 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qf4jk" podStartSLOduration=126.209893621 podStartE2EDuration="2m6.209893621s" podCreationTimestamp="2026-01-27 09:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:17.209678801 +0000 UTC m=+145.830102884" watchObservedRunningTime="2026-01-27 09:56:17.209893621 +0000 
UTC m=+145.830317704" Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.212486 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" event={"ID":"0421ca21-bf4e-4c89-9a3d-18a7603c1084","Type":"ContainerStarted","Data":"baa8d91080c3dd81e799a8296bbd0109af660eed2a92a6908f1f3a348bf03a20"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.220640 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z" event={"ID":"3a2ec119-d8f3-4edb-aa2f-d4ffd3617458","Type":"ContainerStarted","Data":"0ef1fb41828fc4a55da677c3c136db07c37cf75a51515492a867ec0537164840"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.222175 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhjsx" event={"ID":"4a39985a-ab91-430a-be02-8f2ac1399a37","Type":"ContainerStarted","Data":"5839c80686124a624f9722d19dec660601b6403093101f4484283ec3791ecd84"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.226039 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j9w4z" event={"ID":"2ecc898c-2377-4e6f-a02e-028eeca5eec8","Type":"ContainerStarted","Data":"37df2c0934175b14b3b640e428bd1581ad759a4e1fb17cc48977ccff9a8e05c4"} Jan 27 09:56:17 crc kubenswrapper[4869]: W0127 09:56:17.226551 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea00b67e_1ce1_40ce_be90_3f491e3c4ea9.slice/crio-426d75af804c6287b842ba73f30bff100276b1b31f63891fa55bb92870b895f6 WatchSource:0}: Error finding container 426d75af804c6287b842ba73f30bff100276b1b31f63891fa55bb92870b895f6: Status 404 returned error can't find the container with id 426d75af804c6287b842ba73f30bff100276b1b31f63891fa55bb92870b895f6 Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.228085 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dwt4c" event={"ID":"01c8f5c5-8c83-43b2-9070-6b138b246718","Type":"ContainerStarted","Data":"7980b3a15d5a627bd54a7a3614586e3684aaa5f112a9776959494cb2e833aafd"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.228111 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dwt4c" event={"ID":"01c8f5c5-8c83-43b2-9070-6b138b246718","Type":"ContainerStarted","Data":"eacfcaff2454a6383892d9ce257729a1df3de4849d8ba0e4fa6e752bc0d1dab1"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.231119 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jw59" event={"ID":"8b83a5e9-9fbe-4404-8dd2-abb2ec6f6e1c","Type":"ContainerStarted","Data":"8f8323299a813130c95cbceb3099ef1c7288c18aaeaf5287cb32e2daabe68bf3"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.231160 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jw59" event={"ID":"8b83a5e9-9fbe-4404-8dd2-abb2ec6f6e1c","Type":"ContainerStarted","Data":"8b65a960f1d483d6ae2c55d33838faac208457d378e683d4cb2a678d35407a59"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.231529 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jw59" Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.243672 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rzwn" podStartSLOduration=126.243653536 podStartE2EDuration="2m6.243653536s" podCreationTimestamp="2026-01-27 09:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:17.241141438 +0000 UTC m=+145.861565521" watchObservedRunningTime="2026-01-27 09:56:17.243653536 +0000 UTC m=+145.864077629" Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.248327 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-clff8" event={"ID":"670d8b6b-95a2-4711-98db-3f71e295093b","Type":"ContainerStarted","Data":"b212bccd9c0ae3e590a88a5af0cb0246ee3a211dc4022259fa41eed42255c8c2"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.259984 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5" event={"ID":"c778877d-77ce-493e-a787-d0b76ff13a77","Type":"ContainerStarted","Data":"95dedc38c6e3c0d580b0d99db0967b116fb566f0d645c08a48ff6e4956ffd5f8"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.266426 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-bnbjg" podStartSLOduration=125.266405491 podStartE2EDuration="2m5.266405491s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:17.261613465 +0000 UTC m=+145.882037558" watchObservedRunningTime="2026-01-27 09:56:17.266405491 +0000 UTC m=+145.886829574" Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.274692 4869 generic.go:334] "Generic (PLEG): container finished" podID="b05e9e31-f26d-4358-a644-796cd3fea7a8" containerID="db3f3934ac4b293151fe71b98b7b0948b2b125480874642d6354fd7c189b2ccf" exitCode=0 Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.274757 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-w8hng" event={"ID":"b05e9e31-f26d-4358-a644-796cd3fea7a8","Type":"ContainerDied","Data":"db3f3934ac4b293151fe71b98b7b0948b2b125480874642d6354fd7c189b2ccf"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.279602 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vxds9" event={"ID":"58af825f-df23-4365-bf18-1b2a0c2d143f","Type":"ContainerStarted","Data":"76c1c2f43eca9526d346153168410b7e28f099440ae664cb21b9e67178af718e"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.282419 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" event={"ID":"fcb02167-7185-4960-a665-fca3f7d2c220","Type":"ContainerStarted","Data":"51923df05c6edc303b2ec157708d5fbb4b52f841c4760a28b15a4a13d3ed3e1e"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.285153 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jw59" podStartSLOduration=125.285140046 podStartE2EDuration="2m5.285140046s" podCreationTimestamp="2026-01-27 09:54:12 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:17.28417781 +0000 UTC m=+145.904601893" watchObservedRunningTime="2026-01-27 09:56:17.285140046 +0000 UTC m=+145.905564119" Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.287748 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:17 crc kubenswrapper[4869]: E0127 09:56:17.289026 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:17.789002389 +0000 UTC m=+146.409426482 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.303125 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-smmkq" event={"ID":"21b1512e-af50-4bdd-8619-5bff9a4ce995","Type":"ContainerStarted","Data":"42e13d17200cdc27a93a65ca83bf4f8f7018cdd52a7796e37973292053d0ae92"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.303157 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-smmkq" event={"ID":"21b1512e-af50-4bdd-8619-5bff9a4ce995","Type":"ContainerStarted","Data":"1d7430a0532491641110f44bd18d350275c0bd949753b7dcd78380e03c3502e0"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.305140 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hrflq" event={"ID":"4c861742-2395-4de1-9cc3-1d8328741cbb","Type":"ContainerStarted","Data":"8ff09f1f76e39977db29bd41da2b0e2a044e18ad3c86b8fb43d4ea21f8a7598a"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.305164 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hrflq" event={"ID":"4c861742-2395-4de1-9cc3-1d8328741cbb","Type":"ContainerStarted","Data":"c160d152e1d2f5e73779039ae1e32721868fbf5c069d1b602cd3beeda6003fc4"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.305172 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hrflq" event={"ID":"4c861742-2395-4de1-9cc3-1d8328741cbb","Type":"ContainerStarted","Data":"17cf6a99f6d5e6bcf6274c71a792c517776ea572a812ee9a114f78128e3ea88a"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.311402 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6f8sx" 
event={"ID":"8b8af0be-d73b-4b8e-b7a2-295834553924","Type":"ContainerStarted","Data":"23301d7d538fa0e0ef0091f53f8b167c40da39f81e4097c5f80bfb3bcf3dc8f8"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.311431 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6f8sx" event={"ID":"8b8af0be-d73b-4b8e-b7a2-295834553924","Type":"ContainerStarted","Data":"9296e0fe25f8c66f68b952d25e2b3b4c62d915d1c26ec07de16c42e69bd40d97"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.314030 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vjcv" event={"ID":"4ab28893-3f63-4c8a-a023-e0447c39a817","Type":"ContainerStarted","Data":"7d3a440cf7e169021e3c93b04e4f7563e9ca83fc7d4846f58579f6f693b684af"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.314732 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vjcv" Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.317731 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" event={"ID":"89c725c4-90e8-4965-b48d-89f3d2771faf","Type":"ContainerStarted","Data":"09d6c236c00d9bee5c98929b5d20eee77cddf282c9ddeac7a3f0027d22c5a5ad"} Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.321431 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-dpsrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.321479 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dpsrp" podUID="493c38dc-c859-4715-b97f-be1388ee2162" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.324858 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-clff8" podStartSLOduration=125.324828341 podStartE2EDuration="2m5.324828341s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:17.323134031 +0000 UTC m=+145.943558114" watchObservedRunningTime="2026-01-27 09:56:17.324828341 +0000 UTC m=+145.945252424" Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.331060 4869 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-9vjcv container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.331122 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vjcv" podUID="4ab28893-3f63-4c8a-a023-e0447c39a817" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.333206 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console-operator/console-operator-58897d9998-gk8wd" Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.388945 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.389542 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vjcv" podStartSLOduration=125.389524317 podStartE2EDuration="2m5.389524317s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:17.379577417 +0000 UTC m=+146.000001500" watchObservedRunningTime="2026-01-27 09:56:17.389524317 +0000 UTC m=+146.009948400" Jan 27 09:56:17 crc kubenswrapper[4869]: E0127 09:56:17.392474 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:17.892460036 +0000 UTC m=+146.512884119 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.407048 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-smmkq" podStartSLOduration=125.407032255 podStartE2EDuration="2m5.407032255s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:17.404751406 +0000 UTC m=+146.025175479" watchObservedRunningTime="2026-01-27 09:56:17.407032255 +0000 UTC m=+146.027456338" Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.440772 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.451411 4869 patch_prober.go:28] interesting pod/router-default-5444994796-ffwjx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 09:56:17 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 27 09:56:17 crc kubenswrapper[4869]: [+]process-running ok Jan 27 09:56:17 crc kubenswrapper[4869]: healthz check failed Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.451451 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffwjx" podUID="17abb21b-00f0-41dd-80a3-5d4cb9acc1e6" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.490106 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.490905 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" podStartSLOduration=125.490888526 podStartE2EDuration="2m5.490888526s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:17.490081118 +0000 UTC m=+146.110505201" watchObservedRunningTime="2026-01-27 09:56:17.490888526 +0000 UTC m=+146.111312609" Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.491079 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-hrflq" podStartSLOduration=125.491075175 podStartE2EDuration="2m5.491075175s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:17.4384794 +0000 UTC m=+146.058903483" watchObservedRunningTime="2026-01-27 09:56:17.491075175 +0000 UTC m=+146.111499258" Jan 27 09:56:17 crc kubenswrapper[4869]: E0127 09:56:17.491589 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:17.991573528 +0000 UTC m=+146.611997611 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.594242 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:17 crc kubenswrapper[4869]: E0127 09:56:17.595181 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:18.095166742 +0000 UTC m=+146.715590825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.695700 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:17 crc kubenswrapper[4869]: E0127 09:56:17.695994 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:18.195971124 +0000 UTC m=+146.816395207 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.696136 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:17 crc kubenswrapper[4869]: E0127 09:56:17.696427 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:18.196418205 +0000 UTC m=+146.816842288 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.797941 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:17 crc kubenswrapper[4869]: E0127 09:56:17.799287 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:18.298069918 +0000 UTC m=+146.918494011 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.799478 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:17 crc kubenswrapper[4869]: E0127 09:56:17.800037 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:18.30002564 +0000 UTC m=+146.920449723 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:17 crc kubenswrapper[4869]: I0127 09:56:17.901118 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:17 crc kubenswrapper[4869]: E0127 09:56:17.901456 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:18.401441971 +0000 UTC m=+147.021866054 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.002307 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:18 crc kubenswrapper[4869]: E0127 09:56:18.002819 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:18.50280279 +0000 UTC m=+147.123226873 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.103355 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:18 crc kubenswrapper[4869]: E0127 09:56:18.103474 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:18.603449645 +0000 UTC m=+147.223873728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.103670 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:18 crc kubenswrapper[4869]: E0127 09:56:18.103973 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:18.60396536 +0000 UTC m=+147.224389443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.204255 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:18 crc kubenswrapper[4869]: E0127 09:56:18.204622 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:18.704606154 +0000 UTC m=+147.325030247 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.306035 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:18 crc kubenswrapper[4869]: E0127 09:56:18.306420 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:18.806401502 +0000 UTC m=+147.426825645 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.328143 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" event={"ID":"fcb02167-7185-4960-a665-fca3f7d2c220","Type":"ContainerStarted","Data":"1d5ebc3d15e0205757c03349a8d692f156f202e9f5869ba6a219f0f9c4d06da9"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.331110 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vjcv" event={"ID":"4ab28893-3f63-4c8a-a023-e0447c39a817","Type":"ContainerStarted","Data":"90cf12bfd8d5c5cb176c298e56318a792fdbd5a902e9981c70666e11bb6e90b3"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.332037 4869 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-9vjcv container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.332100 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vjcv" podUID="4ab28893-3f63-4c8a-a023-e0447c39a817" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.338956 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-849z4" event={"ID":"ea00b67e-1ce1-40ce-be90-3f491e3c4ea9","Type":"ContainerStarted","Data":"660a78d1e4b08f0c02141910a4d4ce038c1a7d70af5cd61b882245852e29bab4"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.338996 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-849z4" event={"ID":"ea00b67e-1ce1-40ce-be90-3f491e3c4ea9","Type":"ContainerStarted","Data":"426d75af804c6287b842ba73f30bff100276b1b31f63891fa55bb92870b895f6"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.342766 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q4j8x" event={"ID":"a7c3d5cf-ce3d-4b64-b685-fe70bcd252a0","Type":"ContainerStarted","Data":"c2d18ab4e3adf4b3ac29b6ff0d59c77cd56e3ac5bd6bb974c4a2aae790b532d5"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.342814 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q4j8x" event={"ID":"a7c3d5cf-ce3d-4b64-b685-fe70bcd252a0","Type":"ContainerStarted","Data":"1e756bfc2b3f96c4ed896b5a69961b2b5eb830b6fce89e0d816a46ee1a2b0772"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.342826 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q4j8x" event={"ID":"a7c3d5cf-ce3d-4b64-b685-fe70bcd252a0","Type":"ContainerStarted","Data":"886e762614ecb4c5c52ca23fbcec63a282e9700833ff28894444c70f71589fdb"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.342940 4869 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-dns/dns-default-q4j8x" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.347313 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-frmph" event={"ID":"51446a4f-e443-47ee-9ca8-a67fdaf62a7e","Type":"ContainerStarted","Data":"53fb3c714555042ea672ec6a23dabbc6b1876c1d24661d5e6fa6c64ba55577b9"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.347356 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-frmph" event={"ID":"51446a4f-e443-47ee-9ca8-a67fdaf62a7e","Type":"ContainerStarted","Data":"5e63e5817abd5d2b824a7a2e95b078320bb68e1dc6669a3d25b990512d992954"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.353608 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn4v9" event={"ID":"ebc0fbc2-11a3-48a6-9442-81ffacb1516a","Type":"ContainerStarted","Data":"d968ad8e68fb08315d8789a915ca9b1ba37dde6a9632ab60572f801fdcb1060d"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.353676 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn4v9" event={"ID":"ebc0fbc2-11a3-48a6-9442-81ffacb1516a","Type":"ContainerStarted","Data":"b1cd90d73910662260b6e056cf30c5b2e313fbf3809461582dbd4e69c0f2229f"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.353688 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn4v9" event={"ID":"ebc0fbc2-11a3-48a6-9442-81ffacb1516a","Type":"ContainerStarted","Data":"7da8e90c6320d4dd876aeacf1e9208fc8598a2bf3cca748298dedcbb1fd75490"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.357139 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-prtqz" podStartSLOduration=126.357118248 podStartE2EDuration="2m6.357118248s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:18.352803175 +0000 UTC m=+146.973227258" watchObservedRunningTime="2026-01-27 09:56:18.357118248 +0000 UTC m=+146.977542332" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.360620 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" event={"ID":"a574e648-77e2-46a1-a2ad-af18e6e9ad57","Type":"ContainerStarted","Data":"c02354e1d8c82c0e29fc61cd13cb2b5a2b24887e5683dd787db90ef951e2a2d5"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.360684 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" event={"ID":"a574e648-77e2-46a1-a2ad-af18e6e9ad57","Type":"ContainerStarted","Data":"8412deb0808bc5033b570d687e3c461688c519c51fbb63f5529421b7755fcdaa"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.360740 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.363003 4869 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6kntj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection 
refused" start-of-body= Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.363171 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" podUID="a574e648-77e2-46a1-a2ad-af18e6e9ad57" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.365627 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vxds9" event={"ID":"58af825f-df23-4365-bf18-1b2a0c2d143f","Type":"ContainerStarted","Data":"8a01531dcff869dbcb78e5a219cff0efa8affe73fc1e351d2507e3b90a4aafa6"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.365674 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vxds9" event={"ID":"58af825f-df23-4365-bf18-1b2a0c2d143f","Type":"ContainerStarted","Data":"4059c046933532c8478fcd694091d361b50cf1a0551639ff245de54fcc610102"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.368788 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xt5cf" event={"ID":"bbc58c2b-401d-4f85-b550-5bdaad4f7c8c","Type":"ContainerStarted","Data":"931bc774673a3fc4fb5c70ed9582e4ec478444094a713302f13c7779c35c1118"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.369070 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xt5cf" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.370099 4869 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xt5cf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.370144 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xt5cf" podUID="bbc58c2b-401d-4f85-b550-5bdaad4f7c8c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.371667 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z" event={"ID":"3a2ec119-d8f3-4edb-aa2f-d4ffd3617458","Type":"ContainerStarted","Data":"81f2de3c56348d49357a97adfae12fb106f9dc64fbef3355806b6feb19137646"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.374303 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-849z4" podStartSLOduration=7.37429047 podStartE2EDuration="7.37429047s" podCreationTimestamp="2026-01-27 09:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:18.371460636 +0000 UTC m=+146.991884719" watchObservedRunningTime="2026-01-27 09:56:18.37429047 +0000 UTC m=+146.994714553" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.377542 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xkkb6" 
event={"ID":"c6b4b05c-6a93-4e36-810e-bb0da0a20d55","Type":"ContainerStarted","Data":"b2796cc2169006ea93f2a4afc487931feab0964fb62f7d3d384fb76023711821"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.377681 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xkkb6" event={"ID":"c6b4b05c-6a93-4e36-810e-bb0da0a20d55","Type":"ContainerStarted","Data":"a78354ebd405f2f6855ebbfebc14f782883b64a942555e942506cb2fb113af09"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.391336 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5" event={"ID":"c778877d-77ce-493e-a787-d0b76ff13a77","Type":"ContainerStarted","Data":"0ee257eaa5a1878f07db1490111d794570ebf2afecb3820c067ce0f957362e94"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.392069 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.394816 4869 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rpng5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" start-of-body= Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.394965 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5" podUID="c778877d-77ce-493e-a787-d0b76ff13a77" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.397117 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6f8sx" event={"ID":"8b8af0be-d73b-4b8e-b7a2-295834553924","Type":"ContainerStarted","Data":"b2c1800d3cd21a417aacc8b7ef9737ebd1c138ca2b6af844462a1a3a47ffb95e"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.406120 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktdt7" event={"ID":"c508e45a-d6fc-419c-960b-7603bf3209b2","Type":"ContainerStarted","Data":"9127ad7f0029a18eb1e4f89f5476e9aa06ed85f40218e0eb48261b6709e17a62"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.409432 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:18 crc kubenswrapper[4869]: E0127 09:56:18.410688 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:18.910672249 +0000 UTC m=+147.531096332 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.419411 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j9w4z" event={"ID":"2ecc898c-2377-4e6f-a02e-028eeca5eec8","Type":"ContainerStarted","Data":"4ff582bbc1ab69cc32e390d80109efae90a4a415ebc174f6123669a518c3a381"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.425344 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2hgg7" event={"ID":"8fa7631e-0a16-43b0-8ac3-dc06b6e9cbb4","Type":"ContainerStarted","Data":"746f9dfe24a0afa25c5837abd15c77a29bd28120de313b55d2859e7542e89356"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.425395 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2hgg7" event={"ID":"8fa7631e-0a16-43b0-8ac3-dc06b6e9cbb4","Type":"ContainerStarted","Data":"0bca28457df82defdb8a33f2ce5f619e0459f7a1ccf63b26fecedfd46ca7a11d"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.430951 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-q4j8x" podStartSLOduration=7.430931756 podStartE2EDuration="7.430931756s" podCreationTimestamp="2026-01-27 09:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:18.402066882 +0000 UTC m=+147.022490965" watchObservedRunningTime="2026-01-27 09:56:18.430931756 +0000 UTC m=+147.051355839" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.432207 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-frmph" podStartSLOduration=126.432199835 podStartE2EDuration="2m6.432199835s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:18.429770691 +0000 UTC m=+147.050194784" watchObservedRunningTime="2026-01-27 09:56:18.432199835 +0000 UTC m=+147.052623918" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.437267 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhjsx" event={"ID":"4a39985a-ab91-430a-be02-8f2ac1399a37","Type":"ContainerStarted","Data":"7d1d10ab06346dd554ca60763abeb4dba820c429b6d5f97db2b87b5f5aae6d2b"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.445189 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2hgg7" podStartSLOduration=126.445174569 podStartE2EDuration="2m6.445174569s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:18.444432684 +0000 UTC m=+147.064856787" watchObservedRunningTime="2026-01-27 09:56:18.445174569 
+0000 UTC m=+147.065598642" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.453206 4869 patch_prober.go:28] interesting pod/router-default-5444994796-ffwjx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 09:56:18 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 27 09:56:18 crc kubenswrapper[4869]: [+]process-running ok Jan 27 09:56:18 crc kubenswrapper[4869]: healthz check failed Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.453267 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffwjx" podUID="17abb21b-00f0-41dd-80a3-5d4cb9acc1e6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.457645 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-w8hng" event={"ID":"b05e9e31-f26d-4358-a644-796cd3fea7a8","Type":"ContainerStarted","Data":"2892a686803b968bb34a0c25c1adaa973a3a09e00faf09b66425207b438b8792"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.485166 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dwt4c" event={"ID":"01c8f5c5-8c83-43b2-9070-6b138b246718","Type":"ContainerStarted","Data":"afcdc51700d5c9d3a225b6e10457908ae88946a04d6a9fe175e1b69743ce7753"} Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.510539 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:18 crc kubenswrapper[4869]: E0127 09:56:18.511349 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:19.011319933 +0000 UTC m=+147.631744026 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.520170 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j9w4z" podStartSLOduration=126.520145861 podStartE2EDuration="2m6.520145861s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:18.505005815 +0000 UTC m=+147.125430018" watchObservedRunningTime="2026-01-27 09:56:18.520145861 +0000 UTC m=+147.140569944" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.521022 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bn4v9" podStartSLOduration=126.521017502 podStartE2EDuration="2m6.521017502s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:18.484801581 +0000 UTC m=+147.105225664" watchObservedRunningTime="2026-01-27 09:56:18.521017502 +0000 UTC m=+147.141441585" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.562133 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.562487 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.579061 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z" podStartSLOduration=127.579046943 podStartE2EDuration="2m7.579046943s" podCreationTimestamp="2026-01-27 09:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:18.57856403 +0000 UTC m=+147.198988113" watchObservedRunningTime="2026-01-27 09:56:18.579046943 +0000 UTC m=+147.199471026" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.620920 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-xkkb6" podStartSLOduration=126.62089172 podStartE2EDuration="2m6.62089172s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:18.618278537 +0000 UTC m=+147.238702640" watchObservedRunningTime="2026-01-27 09:56:18.62089172 +0000 UTC m=+147.241315803" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.621661 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:18 crc kubenswrapper[4869]: E0127 09:56:18.622946 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:19.122922556 +0000 UTC m=+147.743346699 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.676960 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ktdt7" podStartSLOduration=126.676944748 podStartE2EDuration="2m6.676944748s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:18.653392105 +0000 UTC m=+147.273816188" watchObservedRunningTime="2026-01-27 09:56:18.676944748 +0000 UTC m=+147.297368821" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.677367 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6f8sx" podStartSLOduration=126.677363218 podStartE2EDuration="2m6.677363218s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:18.677162918 +0000 UTC m=+147.297587001" watchObservedRunningTime="2026-01-27 09:56:18.677363218 +0000 UTC m=+147.297787291" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.724657 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:18 crc kubenswrapper[4869]: E0127 09:56:18.725013 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:19.224997878 +0000 UTC m=+147.845421961 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.765616 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" podStartSLOduration=126.765598606 podStartE2EDuration="2m6.765598606s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:18.735152368 +0000 UTC m=+147.355576451" watchObservedRunningTime="2026-01-27 09:56:18.765598606 +0000 UTC m=+147.386022689" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.789947 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vxds9" podStartSLOduration=126.789929406 podStartE2EDuration="2m6.789929406s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:18.784959501 +0000 UTC m=+147.405383594" watchObservedRunningTime="2026-01-27 09:56:18.789929406 +0000 UTC m=+147.410353489" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.792405 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xt5cf" podStartSLOduration=126.792387122 podStartE2EDuration="2m6.792387122s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:18.767032374 +0000 UTC m=+147.387456467" watchObservedRunningTime="2026-01-27 09:56:18.792387122 +0000 UTC m=+147.412811205" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.817728 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5" podStartSLOduration=126.817712159 podStartE2EDuration="2m6.817712159s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:18.817037096 +0000 UTC m=+147.437461169" watchObservedRunningTime="2026-01-27 09:56:18.817712159 +0000 UTC m=+147.438136242" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.825522 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:18 crc kubenswrapper[4869]: E0127 09:56:18.825686 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 09:56:19.325658054 +0000 UTC m=+147.946082157 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.825798 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:18 crc kubenswrapper[4869]: E0127 09:56:18.826280 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:19.326264183 +0000 UTC m=+147.946688266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.856038 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-w8hng" podStartSLOduration=126.856017768 podStartE2EDuration="2m6.856017768s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:18.85522228 +0000 UTC m=+147.475646363" watchObservedRunningTime="2026-01-27 09:56:18.856017768 +0000 UTC m=+147.476441851" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.873685 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dwt4c" podStartSLOduration=127.873668112 podStartE2EDuration="2m7.873668112s" podCreationTimestamp="2026-01-27 09:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:18.870631168 +0000 UTC m=+147.491055251" watchObservedRunningTime="2026-01-27 09:56:18.873668112 +0000 UTC m=+147.494092205" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.887980 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zhjsx" podStartSLOduration=126.887960437 podStartE2EDuration="2m6.887960437s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:18.886261566 +0000 UTC m=+147.506685649" 
watchObservedRunningTime="2026-01-27 09:56:18.887960437 +0000 UTC m=+147.508384520" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.902380 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.902450 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.903713 4869 patch_prober.go:28] interesting pod/apiserver-76f77b778f-w8hng container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.903749 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-w8hng" podUID="b05e9e31-f26d-4358-a644-796cd3fea7a8" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 27 09:56:18 crc kubenswrapper[4869]: I0127 09:56:18.927229 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:18 crc kubenswrapper[4869]: E0127 09:56:18.927532 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:19.427518505 +0000 UTC m=+148.047942588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.029099 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:19 crc kubenswrapper[4869]: E0127 09:56:19.029384 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:19.529369087 +0000 UTC m=+148.149793170 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.130296 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:19 crc kubenswrapper[4869]: E0127 09:56:19.130508 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:19.630474644 +0000 UTC m=+148.250898727 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.130693 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:19 crc kubenswrapper[4869]: E0127 09:56:19.131012 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:19.631000239 +0000 UTC m=+148.251424322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.231796 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:19 crc kubenswrapper[4869]: E0127 09:56:19.231975 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:19.731947288 +0000 UTC m=+148.352371381 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.232053 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:19 crc kubenswrapper[4869]: E0127 09:56:19.232377 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:19.732366428 +0000 UTC m=+148.352790561 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.243509 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.332906 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:19 crc kubenswrapper[4869]: E0127 09:56:19.333126 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:19.833094526 +0000 UTC m=+148.453518609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.333320 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:19 crc kubenswrapper[4869]: E0127 09:56:19.333622 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:19.83361341 +0000 UTC m=+148.454037483 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.435292 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:19 crc kubenswrapper[4869]: E0127 09:56:19.435518 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:19.935482123 +0000 UTC m=+148.555906206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.435612 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:19 crc kubenswrapper[4869]: E0127 09:56:19.435968 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:19.935951045 +0000 UTC m=+148.556375128 (durationBeforeRetry 500ms). 
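
The cycle repeating above, UnmountVolume.TearDown and MountVolume.MountDevice both failing with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers", means the kubelet has not yet seen a node registration for the hostpath provisioner. The list the kubelet consults is surfaced in the CSINode object for the node; a minimal client-go sketch for inspecting it (assumptions: a kubeconfig at the default location, and node name "crc", which is what this log's hostname suggests):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// CSINode lists the drivers that have completed node registration,
	// i.e. the list the kubelet consults before mounting a CSI volume.
	// "crc" is assumed to be the node name, per this log's hostname.
	csiNode, err := clientset.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered:", d.Name)
	}
}

The same check is available as "oc get csinode crc -o yaml"; until kubevirt.io.hostpath-provisioner shows up under spec.drivers, every mount and unmount attempt for this PVC keeps failing exactly as logged below.
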
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.439930 4869 patch_prober.go:28] interesting pod/router-default-5444994796-ffwjx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 09:56:19 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 27 09:56:19 crc kubenswrapper[4869]: [+]process-running ok Jan 27 09:56:19 crc kubenswrapper[4869]: healthz check failed Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.439983 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffwjx" podUID="17abb21b-00f0-41dd-80a3-5d4cb9acc1e6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.491410 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-w8hng" event={"ID":"b05e9e31-f26d-4358-a644-796cd3fea7a8","Type":"ContainerStarted","Data":"d0fd909cc9501b3273bac86276c568a8172277248c04c0e813085d94ceb6d55e"} Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.493816 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" event={"ID":"0421ca21-bf4e-4c89-9a3d-18a7603c1084","Type":"ContainerStarted","Data":"881fb86a5c53827003da66d281eea80be0ef728e9852a45535efbde0b1be3939"} Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.493970 4869 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6kntj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.494011 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" podUID="a574e648-77e2-46a1-a2ad-af18e6e9ad57" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.505374 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-9vjcv" Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.507885 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r7z5l" Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.508456 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xt5cf" Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.536487 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:19 crc kubenswrapper[4869]: E0127 09:56:19.536719 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:20.036676344 +0000 UTC m=+148.657100457 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.536864 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:19 crc kubenswrapper[4869]: E0127 09:56:19.537178 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:20.037162517 +0000 UTC m=+148.657586660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.638932 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:19 crc kubenswrapper[4869]: E0127 09:56:19.640748 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:20.14073258 +0000 UTC m=+148.761156663 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.741546 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:19 crc kubenswrapper[4869]: E0127 09:56:19.741972 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:20.241955792 +0000 UTC m=+148.862379875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.816948 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9rzwn" Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.842609 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.842890 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.842918 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.842937 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") 
" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:56:19 crc kubenswrapper[4869]: E0127 09:56:19.843615 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:20.343590873 +0000 UTC m=+148.964014956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.844309 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.850435 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.850636 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.944243 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:19 crc kubenswrapper[4869]: E0127 09:56:19.944878 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:20.444853727 +0000 UTC m=+149.065277810 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.944997 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.952389 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.961103 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:56:19 crc kubenswrapper[4869]: I0127 09:56:19.974109 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.046417 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:20 crc kubenswrapper[4869]: E0127 09:56:20.047790 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:20.547744517 +0000 UTC m=+149.168168600 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.150966 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:20 crc kubenswrapper[4869]: E0127 09:56:20.151300 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:20.651284799 +0000 UTC m=+149.271708882 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.249627 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.251562 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:20 crc kubenswrapper[4869]: E0127 09:56:20.251670 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:20.751641101 +0000 UTC m=+149.372065184 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.252146 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:20 crc kubenswrapper[4869]: E0127 09:56:20.252473 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:20.752461349 +0000 UTC m=+149.372885432 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.356162 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:20 crc kubenswrapper[4869]: E0127 09:56:20.356496 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:20.856482413 +0000 UTC m=+149.476906496 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.446083 4869 patch_prober.go:28] interesting pod/router-default-5444994796-ffwjx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 09:56:20 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 27 09:56:20 crc kubenswrapper[4869]: [+]process-running ok Jan 27 09:56:20 crc kubenswrapper[4869]: healthz check failed Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.446159 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffwjx" podUID="17abb21b-00f0-41dd-80a3-5d4cb9acc1e6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.459540 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:20 crc kubenswrapper[4869]: E0127 09:56:20.459920 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:20.959902259 +0000 UTC m=+149.580326342 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.497941 4869 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rpng5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.497979 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5" podUID="c778877d-77ce-493e-a787-d0b76ff13a77" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.29:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.538053 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b5df54847e17ffc20efd0ed2f9b484e41a499690e2302b49c6468ed93003ae4c"} Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.546182 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" event={"ID":"0421ca21-bf4e-4c89-9a3d-18a7603c1084","Type":"ContainerStarted","Data":"dbabbf62f568aaf027d00df3275eb4c700b8c69f53325b46bb383456fb182b40"} Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.565454 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:20 crc kubenswrapper[4869]: E0127 09:56:20.565861 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:21.065818622 +0000 UTC m=+149.686242706 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:20 crc kubenswrapper[4869]: W0127 09:56:20.626586 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-cf803b42e8792b73225f84a1a966c8d2339acc2af44b011bb0cd8ba74d33fe44 WatchSource:0}: Error finding container cf803b42e8792b73225f84a1a966c8d2339acc2af44b011bb0cd8ba74d33fe44: Status 404 returned error can't find the container with id cf803b42e8792b73225f84a1a966c8d2339acc2af44b011bb0cd8ba74d33fe44 Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.667030 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:20 crc kubenswrapper[4869]: E0127 09:56:20.667956 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:21.167942338 +0000 UTC m=+149.788366421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.769985 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:20 crc kubenswrapper[4869]: E0127 09:56:20.770124 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:21.270098203 +0000 UTC m=+149.890522286 (durationBeforeRetry 500ms). 
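
Each failure above ends with "No retries permitted until ... (durationBeforeRetry 500ms)": the reconciler does not sleep in place, it records the failure and refuses the same operation until the retry window has elapsed, which is why the identical pair of errors reappears on roughly 100ms reconciler ticks but only actually retries every 500ms. A toy sketch of that gating pattern (names like retryGate are invented for illustration; the kubelet's real nestedpendingoperations code also grows the delay from this initial 500ms, while the sketch keeps it fixed):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryGate mimics the gating seen in the log: after a failure, the
// same operation is refused until lastFailure+delay has passed
// ("No retries permitted until ... (durationBeforeRetry 500ms)").
type retryGate struct {
	lastFailure time.Time
	delay       time.Duration
}

func (g *retryGate) try(op func() error) error {
	if until := g.lastFailure.Add(g.delay); time.Now().Before(until) {
		return fmt.Errorf("no retries permitted until %s", until.Format(time.RFC3339Nano))
	}
	if err := op(); err != nil {
		g.lastFailure = time.Now()
		return err
	}
	return nil
}

func main() {
	gate := &retryGate{delay: 500 * time.Millisecond}
	mountDevice := func() error {
		return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
	}
	// The reconciler re-attempts on short ticks, as the timestamps
	// above show; most attempts are rejected by the gate.
	for i := 0; i < 8; i++ {
		fmt.Println(gate.try(mountDevice))
		time.Sleep(100 * time.Millisecond)
	}
}
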
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.770388 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:20 crc kubenswrapper[4869]: E0127 09:56:20.770717 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:21.270705192 +0000 UTC m=+149.891129275 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.776286 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b8njf"] Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.777208 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b8njf" Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.784718 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.792618 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b8njf"] Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.880806 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.881067 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95593b9c-39c7-40b7-aadc-4b8292206b30-catalog-content\") pod \"community-operators-b8njf\" (UID: \"95593b9c-39c7-40b7-aadc-4b8292206b30\") " pod="openshift-marketplace/community-operators-b8njf" Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.881153 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95593b9c-39c7-40b7-aadc-4b8292206b30-utilities\") pod \"community-operators-b8njf\" (UID: \"95593b9c-39c7-40b7-aadc-4b8292206b30\") " pod="openshift-marketplace/community-operators-b8njf" Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.881234 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp5tr\" (UniqueName: \"kubernetes.io/projected/95593b9c-39c7-40b7-aadc-4b8292206b30-kube-api-access-vp5tr\") pod \"community-operators-b8njf\" (UID: \"95593b9c-39c7-40b7-aadc-4b8292206b30\") " pod="openshift-marketplace/community-operators-b8njf" Jan 27 09:56:20 crc kubenswrapper[4869]: E0127 09:56:20.881354 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:21.381336189 +0000 UTC m=+150.001760282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.918435 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rpng5" Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.962957 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kgzqt"] Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.965351 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kgzqt" Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.969822 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.980502 4869 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.982640 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp5tr\" (UniqueName: \"kubernetes.io/projected/95593b9c-39c7-40b7-aadc-4b8292206b30-kube-api-access-vp5tr\") pod \"community-operators-b8njf\" (UID: \"95593b9c-39c7-40b7-aadc-4b8292206b30\") " pod="openshift-marketplace/community-operators-b8njf" Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.982742 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95593b9c-39c7-40b7-aadc-4b8292206b30-catalog-content\") pod \"community-operators-b8njf\" (UID: \"95593b9c-39c7-40b7-aadc-4b8292206b30\") " pod="openshift-marketplace/community-operators-b8njf" Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.983208 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95593b9c-39c7-40b7-aadc-4b8292206b30-catalog-content\") pod \"community-operators-b8njf\" (UID: \"95593b9c-39c7-40b7-aadc-4b8292206b30\") " pod="openshift-marketplace/community-operators-b8njf" Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.983274 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95593b9c-39c7-40b7-aadc-4b8292206b30-utilities\") pod \"community-operators-b8njf\" (UID: \"95593b9c-39c7-40b7-aadc-4b8292206b30\") " pod="openshift-marketplace/community-operators-b8njf" Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.983316 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:20 crc kubenswrapper[4869]: E0127 09:56:20.983617 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:21.48360202 +0000 UTC m=+150.104026113 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.984086 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95593b9c-39c7-40b7-aadc-4b8292206b30-utilities\") pod \"community-operators-b8njf\" (UID: \"95593b9c-39c7-40b7-aadc-4b8292206b30\") " pod="openshift-marketplace/community-operators-b8njf" Jan 27 09:56:20 crc kubenswrapper[4869]: I0127 09:56:20.991551 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kgzqt"] Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.029375 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp5tr\" (UniqueName: \"kubernetes.io/projected/95593b9c-39c7-40b7-aadc-4b8292206b30-kube-api-access-vp5tr\") pod \"community-operators-b8njf\" (UID: \"95593b9c-39c7-40b7-aadc-4b8292206b30\") " pod="openshift-marketplace/community-operators-b8njf" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.084481 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:21 crc kubenswrapper[4869]: E0127 09:56:21.084790 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:21.584755399 +0000 UTC m=+150.205179482 (durationBeforeRetry 500ms). 
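
The plugin_watcher line above, "Adding socket path or updating timestamp to desired state cache" for /var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock, is the turning point in this sequence: the driver's registration socket has finally appeared on disk, and shortly afterwards (below) csi_plugin.go validates and registers the driver, at which point the pending mounts start succeeding. The discovery mechanism is a watch on the registry directory; an illustrative sketch of the idea using the fsnotify library, not the kubelet's actual implementation (which lives under pkg/kubelet/pluginmanager):

package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/fsnotify/fsnotify"
)

func main() {
	// Watch the kubelet's plugin registry directory for new driver
	// registration sockets, the same discovery idea the plugin_watcher
	// log line above records. Illustrative only.
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	if err := watcher.Add("/var/lib/kubelet/plugins_registry"); err != nil {
		log.Fatal(err)
	}
	for event := range watcher.Events {
		if event.Op&fsnotify.Create != 0 && strings.HasSuffix(event.Name, ".sock") {
			fmt.Println("new plugin socket:", event.Name)
			// The kubelet would now dial this socket, ask the plugin
			// for its name and endpoint over the registration gRPC
			// API, and add the driver to its registered list.
		}
	}
}

After dialing the socket, the kubelet learns the driver name (kubevirt.io.hostpath-provisioner) and its CSI endpoint (/var/lib/kubelet/plugins/csi-hostpath/csi.sock), which is exactly what the "Register new plugin" line below records; MountVolume.MountDevice for the image-registry PVC then succeeds.
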
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.085044 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.085092 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75088e3e-820e-444a-b9d1-ed7be4c7bbad-catalog-content\") pod \"certified-operators-kgzqt\" (UID: \"75088e3e-820e-444a-b9d1-ed7be4c7bbad\") " pod="openshift-marketplace/certified-operators-kgzqt" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.085126 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75088e3e-820e-444a-b9d1-ed7be4c7bbad-utilities\") pod \"certified-operators-kgzqt\" (UID: \"75088e3e-820e-444a-b9d1-ed7be4c7bbad\") " pod="openshift-marketplace/certified-operators-kgzqt" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.085153 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6lpg\" (UniqueName: \"kubernetes.io/projected/75088e3e-820e-444a-b9d1-ed7be4c7bbad-kube-api-access-v6lpg\") pod \"certified-operators-kgzqt\" (UID: \"75088e3e-820e-444a-b9d1-ed7be4c7bbad\") " pod="openshift-marketplace/certified-operators-kgzqt" Jan 27 09:56:21 crc kubenswrapper[4869]: E0127 09:56:21.085454 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 09:56:21.585439131 +0000 UTC m=+150.205863204 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jsrbp" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.104087 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b8njf" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.153992 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bbhz7"] Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.154925 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bbhz7" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.165792 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bbhz7"] Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.186279 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.186537 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75088e3e-820e-444a-b9d1-ed7be4c7bbad-catalog-content\") pod \"certified-operators-kgzqt\" (UID: \"75088e3e-820e-444a-b9d1-ed7be4c7bbad\") " pod="openshift-marketplace/certified-operators-kgzqt" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.186570 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75088e3e-820e-444a-b9d1-ed7be4c7bbad-utilities\") pod \"certified-operators-kgzqt\" (UID: \"75088e3e-820e-444a-b9d1-ed7be4c7bbad\") " pod="openshift-marketplace/certified-operators-kgzqt" Jan 27 09:56:21 crc kubenswrapper[4869]: E0127 09:56:21.186657 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 09:56:21.686623201 +0000 UTC m=+150.307047294 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.186730 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6lpg\" (UniqueName: \"kubernetes.io/projected/75088e3e-820e-444a-b9d1-ed7be4c7bbad-kube-api-access-v6lpg\") pod \"certified-operators-kgzqt\" (UID: \"75088e3e-820e-444a-b9d1-ed7be4c7bbad\") " pod="openshift-marketplace/certified-operators-kgzqt" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.186943 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75088e3e-820e-444a-b9d1-ed7be4c7bbad-utilities\") pod \"certified-operators-kgzqt\" (UID: \"75088e3e-820e-444a-b9d1-ed7be4c7bbad\") " pod="openshift-marketplace/certified-operators-kgzqt" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.187160 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75088e3e-820e-444a-b9d1-ed7be4c7bbad-catalog-content\") pod \"certified-operators-kgzqt\" (UID: \"75088e3e-820e-444a-b9d1-ed7be4c7bbad\") " pod="openshift-marketplace/certified-operators-kgzqt" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.198050 4869 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T09:56:20.980533195Z","Handler":null,"Name":""} Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.204508 4869 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.204535 4869 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.211786 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6lpg\" (UniqueName: \"kubernetes.io/projected/75088e3e-820e-444a-b9d1-ed7be4c7bbad-kube-api-access-v6lpg\") pod \"certified-operators-kgzqt\" (UID: \"75088e3e-820e-444a-b9d1-ed7be4c7bbad\") " pod="openshift-marketplace/certified-operators-kgzqt" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.286790 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kgzqt" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.291493 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp5dn\" (UniqueName: \"kubernetes.io/projected/51d71dd0-a5ff-4891-8801-03d66bb6994c-kube-api-access-lp5dn\") pod \"community-operators-bbhz7\" (UID: \"51d71dd0-a5ff-4891-8801-03d66bb6994c\") " pod="openshift-marketplace/community-operators-bbhz7" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.291611 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.291675 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51d71dd0-a5ff-4891-8801-03d66bb6994c-catalog-content\") pod \"community-operators-bbhz7\" (UID: \"51d71dd0-a5ff-4891-8801-03d66bb6994c\") " pod="openshift-marketplace/community-operators-bbhz7" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.291730 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51d71dd0-a5ff-4891-8801-03d66bb6994c-utilities\") pod \"community-operators-bbhz7\" (UID: \"51d71dd0-a5ff-4891-8801-03d66bb6994c\") " pod="openshift-marketplace/community-operators-bbhz7" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.299134 4869 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.299195 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.352254 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b8njf"] Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.358488 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c6zf4"] Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.359609 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c6zf4" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.369249 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c6zf4"] Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.369821 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jsrbp\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.395689 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.395883 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe066c30-021e-4a80-8541-148eec52dde8-utilities\") pod \"certified-operators-c6zf4\" (UID: \"fe066c30-021e-4a80-8541-148eec52dde8\") " pod="openshift-marketplace/certified-operators-c6zf4" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.395919 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe066c30-021e-4a80-8541-148eec52dde8-catalog-content\") pod \"certified-operators-c6zf4\" (UID: \"fe066c30-021e-4a80-8541-148eec52dde8\") " pod="openshift-marketplace/certified-operators-c6zf4" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.395971 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51d71dd0-a5ff-4891-8801-03d66bb6994c-catalog-content\") pod \"community-operators-bbhz7\" (UID: \"51d71dd0-a5ff-4891-8801-03d66bb6994c\") " pod="openshift-marketplace/community-operators-bbhz7" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.396061 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv9p2\" (UniqueName: \"kubernetes.io/projected/fe066c30-021e-4a80-8541-148eec52dde8-kube-api-access-xv9p2\") pod \"certified-operators-c6zf4\" (UID: \"fe066c30-021e-4a80-8541-148eec52dde8\") " pod="openshift-marketplace/certified-operators-c6zf4" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.396101 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51d71dd0-a5ff-4891-8801-03d66bb6994c-utilities\") pod \"community-operators-bbhz7\" (UID: \"51d71dd0-a5ff-4891-8801-03d66bb6994c\") " pod="openshift-marketplace/community-operators-bbhz7" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.396169 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp5dn\" (UniqueName: \"kubernetes.io/projected/51d71dd0-a5ff-4891-8801-03d66bb6994c-kube-api-access-lp5dn\") pod \"community-operators-bbhz7\" (UID: \"51d71dd0-a5ff-4891-8801-03d66bb6994c\") " pod="openshift-marketplace/community-operators-bbhz7" Jan 27 09:56:21 crc 
kubenswrapper[4869]: I0127 09:56:21.396380 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51d71dd0-a5ff-4891-8801-03d66bb6994c-catalog-content\") pod \"community-operators-bbhz7\" (UID: \"51d71dd0-a5ff-4891-8801-03d66bb6994c\") " pod="openshift-marketplace/community-operators-bbhz7" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.396744 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51d71dd0-a5ff-4891-8801-03d66bb6994c-utilities\") pod \"community-operators-bbhz7\" (UID: \"51d71dd0-a5ff-4891-8801-03d66bb6994c\") " pod="openshift-marketplace/community-operators-bbhz7" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.419717 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp5dn\" (UniqueName: \"kubernetes.io/projected/51d71dd0-a5ff-4891-8801-03d66bb6994c-kube-api-access-lp5dn\") pod \"community-operators-bbhz7\" (UID: \"51d71dd0-a5ff-4891-8801-03d66bb6994c\") " pod="openshift-marketplace/community-operators-bbhz7" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.423223 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.447359 4869 patch_prober.go:28] interesting pod/router-default-5444994796-ffwjx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 09:56:21 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 27 09:56:21 crc kubenswrapper[4869]: [+]process-running ok Jan 27 09:56:21 crc kubenswrapper[4869]: healthz check failed Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.447417 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffwjx" podUID="17abb21b-00f0-41dd-80a3-5d4cb9acc1e6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.470203 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bbhz7" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.497753 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe066c30-021e-4a80-8541-148eec52dde8-utilities\") pod \"certified-operators-c6zf4\" (UID: \"fe066c30-021e-4a80-8541-148eec52dde8\") " pod="openshift-marketplace/certified-operators-c6zf4" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.497808 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe066c30-021e-4a80-8541-148eec52dde8-catalog-content\") pod \"certified-operators-c6zf4\" (UID: \"fe066c30-021e-4a80-8541-148eec52dde8\") " pod="openshift-marketplace/certified-operators-c6zf4" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.497876 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv9p2\" (UniqueName: \"kubernetes.io/projected/fe066c30-021e-4a80-8541-148eec52dde8-kube-api-access-xv9p2\") pod \"certified-operators-c6zf4\" (UID: \"fe066c30-021e-4a80-8541-148eec52dde8\") " pod="openshift-marketplace/certified-operators-c6zf4" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.499313 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe066c30-021e-4a80-8541-148eec52dde8-utilities\") pod \"certified-operators-c6zf4\" (UID: \"fe066c30-021e-4a80-8541-148eec52dde8\") " pod="openshift-marketplace/certified-operators-c6zf4" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.499399 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe066c30-021e-4a80-8541-148eec52dde8-catalog-content\") pod \"certified-operators-c6zf4\" (UID: \"fe066c30-021e-4a80-8541-148eec52dde8\") " pod="openshift-marketplace/certified-operators-c6zf4" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.502226 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kgzqt"] Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.521675 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv9p2\" (UniqueName: \"kubernetes.io/projected/fe066c30-021e-4a80-8541-148eec52dde8-kube-api-access-xv9p2\") pod \"certified-operators-c6zf4\" (UID: \"fe066c30-021e-4a80-8541-148eec52dde8\") " pod="openshift-marketplace/certified-operators-c6zf4" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.553781 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4890f0df966809703c89404944ddb13bd88521d249c6acc0ce05083602d38769"} Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.554522 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.580993 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f5f9454d4e3b58935956ff512362caece2f7634b2119dfab3d47f6c0b8aaeb58"} Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.581040 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"cf803b42e8792b73225f84a1a966c8d2339acc2af44b011bb0cd8ba74d33fe44"} Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.595899 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgzqt" event={"ID":"75088e3e-820e-444a-b9d1-ed7be4c7bbad","Type":"ContainerStarted","Data":"734d4fc532ec51731078e6ad3b9fd0feb6bc87d6cc54b220c64c9565705e8078"} Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.602974 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" event={"ID":"0421ca21-bf4e-4c89-9a3d-18a7603c1084","Type":"ContainerStarted","Data":"e5fc4c210a50dad9aea6b9b6abc1041551859357ae65accbfe07a76241393cdf"} Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.603024 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" event={"ID":"0421ca21-bf4e-4c89-9a3d-18a7603c1084","Type":"ContainerStarted","Data":"266ea91a37135ca9070748acaee733f08b8b759c10406c0ffef8027915ad28f4"} Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.604774 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0faf5ac70d3517afffd4ec2d37e10a4cb6fb54aa46b14fd527e816f1d0ab8b63"} Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.604857 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6eabd8bed858c905cdc454c2c167af1d22d79174fda348308985b1b03c35e8d0"} Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.608578 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8njf" event={"ID":"95593b9c-39c7-40b7-aadc-4b8292206b30","Type":"ContainerStarted","Data":"cc80796aaf88d442f54255e84be329dd8caec807374fb61e456ad2b43f7f5aef"} Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.631259 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.653121 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-jcj5k" podStartSLOduration=10.653080017 podStartE2EDuration="10.653080017s" podCreationTimestamp="2026-01-27 09:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:21.6505993 +0000 UTC m=+150.271023393" watchObservedRunningTime="2026-01-27 09:56:21.653080017 +0000 UTC m=+150.273504100" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.709568 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c6zf4" Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.903888 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jsrbp"] Jan 27 09:56:21 crc kubenswrapper[4869]: W0127 09:56:21.914210 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61352dfb_6006_4c3f_b404_b32f8a54c08d.slice/crio-b91aab27c87e1112ea98a0683657d8269cc0a4444b07b34d4d29701d44745813 WatchSource:0}: Error finding container b91aab27c87e1112ea98a0683657d8269cc0a4444b07b34d4d29701d44745813: Status 404 returned error can't find the container with id b91aab27c87e1112ea98a0683657d8269cc0a4444b07b34d4d29701d44745813 Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.956342 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c6zf4"] Jan 27 09:56:21 crc kubenswrapper[4869]: W0127 09:56:21.972556 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe066c30_021e_4a80_8541_148eec52dde8.slice/crio-7db8e691e3fa02ea7a70ac3299d546afbf411dc0e8b8b956bc70b7585238fbfd WatchSource:0}: Error finding container 7db8e691e3fa02ea7a70ac3299d546afbf411dc0e8b8b956bc70b7585238fbfd: Status 404 returned error can't find the container with id 7db8e691e3fa02ea7a70ac3299d546afbf411dc0e8b8b956bc70b7585238fbfd Jan 27 09:56:21 crc kubenswrapper[4869]: I0127 09:56:21.974409 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bbhz7"] Jan 27 09:56:22 crc kubenswrapper[4869]: W0127 09:56:22.003302 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51d71dd0_a5ff_4891_8801_03d66bb6994c.slice/crio-f4123809ca2cbed556d104426c353cd2209b04bd6933921a7429c41e2b48002c WatchSource:0}: Error finding container f4123809ca2cbed556d104426c353cd2209b04bd6933921a7429c41e2b48002c: Status 404 returned error can't find the container with id f4123809ca2cbed556d104426c353cd2209b04bd6933921a7429c41e2b48002c Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.039615 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.439528 4869 patch_prober.go:28] interesting pod/router-default-5444994796-ffwjx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 09:56:22 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 27 09:56:22 crc kubenswrapper[4869]: [+]process-running ok Jan 27 09:56:22 crc kubenswrapper[4869]: healthz check failed Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.439886 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffwjx" podUID="17abb21b-00f0-41dd-80a3-5d4cb9acc1e6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.615414 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" 
event={"ID":"61352dfb-6006-4c3f-b404-b32f8a54c08d","Type":"ContainerStarted","Data":"d257b8a2ac9177f08d130f151f9799a355fa2fd1049395d02bfab94f141b8644"} Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.615476 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" event={"ID":"61352dfb-6006-4c3f-b404-b32f8a54c08d","Type":"ContainerStarted","Data":"b91aab27c87e1112ea98a0683657d8269cc0a4444b07b34d4d29701d44745813"} Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.615590 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.617515 4869 generic.go:334] "Generic (PLEG): container finished" podID="75088e3e-820e-444a-b9d1-ed7be4c7bbad" containerID="f73a5ed74beeda10d95f73b5d70d6ee501eb273a2472a151a130ad8a49c6466b" exitCode=0 Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.617580 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgzqt" event={"ID":"75088e3e-820e-444a-b9d1-ed7be4c7bbad","Type":"ContainerDied","Data":"f73a5ed74beeda10d95f73b5d70d6ee501eb273a2472a151a130ad8a49c6466b"} Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.619098 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.619125 4869 generic.go:334] "Generic (PLEG): container finished" podID="51d71dd0-a5ff-4891-8801-03d66bb6994c" containerID="e72a48c0c8382e428e43f1a66979abf896a2196eb31f724f2d82c40e96f17977" exitCode=0 Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.619171 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbhz7" event={"ID":"51d71dd0-a5ff-4891-8801-03d66bb6994c","Type":"ContainerDied","Data":"e72a48c0c8382e428e43f1a66979abf896a2196eb31f724f2d82c40e96f17977"} Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.619208 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbhz7" event={"ID":"51d71dd0-a5ff-4891-8801-03d66bb6994c","Type":"ContainerStarted","Data":"f4123809ca2cbed556d104426c353cd2209b04bd6933921a7429c41e2b48002c"} Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.622425 4869 generic.go:334] "Generic (PLEG): container finished" podID="95593b9c-39c7-40b7-aadc-4b8292206b30" containerID="cf87f3058e957a5b643bc532c799f1fa2f6a2e63a835f96f8dbdcb4564d4affd" exitCode=0 Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.622491 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8njf" event={"ID":"95593b9c-39c7-40b7-aadc-4b8292206b30","Type":"ContainerDied","Data":"cf87f3058e957a5b643bc532c799f1fa2f6a2e63a835f96f8dbdcb4564d4affd"} Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.624072 4869 generic.go:334] "Generic (PLEG): container finished" podID="fe066c30-021e-4a80-8541-148eec52dde8" containerID="4ee52253452b8908ce112d64ec0d2a22bc9d11b088d23eef3a62e1528945c906" exitCode=0 Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.624145 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6zf4" event={"ID":"fe066c30-021e-4a80-8541-148eec52dde8","Type":"ContainerDied","Data":"4ee52253452b8908ce112d64ec0d2a22bc9d11b088d23eef3a62e1528945c906"} Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.624177 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6zf4" event={"ID":"fe066c30-021e-4a80-8541-148eec52dde8","Type":"ContainerStarted","Data":"7db8e691e3fa02ea7a70ac3299d546afbf411dc0e8b8b956bc70b7585238fbfd"} Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.656908 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" podStartSLOduration=130.656886329 podStartE2EDuration="2m10.656886329s" podCreationTimestamp="2026-01-27 09:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:22.656603006 +0000 UTC m=+151.277027089" watchObservedRunningTime="2026-01-27 09:56:22.656886329 +0000 UTC m=+151.277310412" Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.775520 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qz25t"] Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.776561 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qz25t" Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.783578 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.843038 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qz25t"] Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.919468 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb3e386-81b3-48d9-ba20-8a27ea09d026-utilities\") pod \"redhat-marketplace-qz25t\" (UID: \"deb3e386-81b3-48d9-ba20-8a27ea09d026\") " pod="openshift-marketplace/redhat-marketplace-qz25t" Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.919544 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw7c9\" (UniqueName: \"kubernetes.io/projected/deb3e386-81b3-48d9-ba20-8a27ea09d026-kube-api-access-lw7c9\") pod \"redhat-marketplace-qz25t\" (UID: \"deb3e386-81b3-48d9-ba20-8a27ea09d026\") " pod="openshift-marketplace/redhat-marketplace-qz25t" Jan 27 09:56:22 crc kubenswrapper[4869]: I0127 09:56:22.919648 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb3e386-81b3-48d9-ba20-8a27ea09d026-catalog-content\") pod \"redhat-marketplace-qz25t\" (UID: \"deb3e386-81b3-48d9-ba20-8a27ea09d026\") " pod="openshift-marketplace/redhat-marketplace-qz25t" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.009764 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.010958 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.013361 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.013582 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.020535 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw7c9\" (UniqueName: \"kubernetes.io/projected/deb3e386-81b3-48d9-ba20-8a27ea09d026-kube-api-access-lw7c9\") pod \"redhat-marketplace-qz25t\" (UID: \"deb3e386-81b3-48d9-ba20-8a27ea09d026\") " pod="openshift-marketplace/redhat-marketplace-qz25t" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.020668 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb3e386-81b3-48d9-ba20-8a27ea09d026-catalog-content\") pod \"redhat-marketplace-qz25t\" (UID: \"deb3e386-81b3-48d9-ba20-8a27ea09d026\") " pod="openshift-marketplace/redhat-marketplace-qz25t" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.020699 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb3e386-81b3-48d9-ba20-8a27ea09d026-utilities\") pod \"redhat-marketplace-qz25t\" (UID: \"deb3e386-81b3-48d9-ba20-8a27ea09d026\") " pod="openshift-marketplace/redhat-marketplace-qz25t" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.021798 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb3e386-81b3-48d9-ba20-8a27ea09d026-catalog-content\") pod \"redhat-marketplace-qz25t\" (UID: \"deb3e386-81b3-48d9-ba20-8a27ea09d026\") " pod="openshift-marketplace/redhat-marketplace-qz25t" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.023944 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb3e386-81b3-48d9-ba20-8a27ea09d026-utilities\") pod \"redhat-marketplace-qz25t\" (UID: \"deb3e386-81b3-48d9-ba20-8a27ea09d026\") " pod="openshift-marketplace/redhat-marketplace-qz25t" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.027377 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.072862 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw7c9\" (UniqueName: \"kubernetes.io/projected/deb3e386-81b3-48d9-ba20-8a27ea09d026-kube-api-access-lw7c9\") pod \"redhat-marketplace-qz25t\" (UID: \"deb3e386-81b3-48d9-ba20-8a27ea09d026\") " pod="openshift-marketplace/redhat-marketplace-qz25t" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.089540 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qz25t" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.121353 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/603a2845-c212-4aed-9faa-8e691d4229b9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"603a2845-c212-4aed-9faa-8e691d4229b9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.121399 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/603a2845-c212-4aed-9faa-8e691d4229b9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"603a2845-c212-4aed-9faa-8e691d4229b9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.158754 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mbnv9"] Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.159925 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mbnv9" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.174059 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mbnv9"] Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.225413 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/603a2845-c212-4aed-9faa-8e691d4229b9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"603a2845-c212-4aed-9faa-8e691d4229b9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.225455 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/603a2845-c212-4aed-9faa-8e691d4229b9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"603a2845-c212-4aed-9faa-8e691d4229b9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.225611 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/603a2845-c212-4aed-9faa-8e691d4229b9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"603a2845-c212-4aed-9faa-8e691d4229b9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.245031 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/603a2845-c212-4aed-9faa-8e691d4229b9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"603a2845-c212-4aed-9faa-8e691d4229b9\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.300495 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qz25t"] Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.326587 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d235ac0-6891-411b-8d02-2333775dcb9a-catalog-content\") pod \"redhat-marketplace-mbnv9\" (UID: \"7d235ac0-6891-411b-8d02-2333775dcb9a\") " 
pod="openshift-marketplace/redhat-marketplace-mbnv9" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.326653 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d235ac0-6891-411b-8d02-2333775dcb9a-utilities\") pod \"redhat-marketplace-mbnv9\" (UID: \"7d235ac0-6891-411b-8d02-2333775dcb9a\") " pod="openshift-marketplace/redhat-marketplace-mbnv9" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.326737 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lfzk\" (UniqueName: \"kubernetes.io/projected/7d235ac0-6891-411b-8d02-2333775dcb9a-kube-api-access-9lfzk\") pod \"redhat-marketplace-mbnv9\" (UID: \"7d235ac0-6891-411b-8d02-2333775dcb9a\") " pod="openshift-marketplace/redhat-marketplace-mbnv9" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.328352 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.428555 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d235ac0-6891-411b-8d02-2333775dcb9a-catalog-content\") pod \"redhat-marketplace-mbnv9\" (UID: \"7d235ac0-6891-411b-8d02-2333775dcb9a\") " pod="openshift-marketplace/redhat-marketplace-mbnv9" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.428641 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d235ac0-6891-411b-8d02-2333775dcb9a-utilities\") pod \"redhat-marketplace-mbnv9\" (UID: \"7d235ac0-6891-411b-8d02-2333775dcb9a\") " pod="openshift-marketplace/redhat-marketplace-mbnv9" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.429889 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d235ac0-6891-411b-8d02-2333775dcb9a-utilities\") pod \"redhat-marketplace-mbnv9\" (UID: \"7d235ac0-6891-411b-8d02-2333775dcb9a\") " pod="openshift-marketplace/redhat-marketplace-mbnv9" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.430088 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lfzk\" (UniqueName: \"kubernetes.io/projected/7d235ac0-6891-411b-8d02-2333775dcb9a-kube-api-access-9lfzk\") pod \"redhat-marketplace-mbnv9\" (UID: \"7d235ac0-6891-411b-8d02-2333775dcb9a\") " pod="openshift-marketplace/redhat-marketplace-mbnv9" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.432733 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d235ac0-6891-411b-8d02-2333775dcb9a-catalog-content\") pod \"redhat-marketplace-mbnv9\" (UID: \"7d235ac0-6891-411b-8d02-2333775dcb9a\") " pod="openshift-marketplace/redhat-marketplace-mbnv9" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.440502 4869 patch_prober.go:28] interesting pod/router-default-5444994796-ffwjx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 09:56:23 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 27 09:56:23 crc kubenswrapper[4869]: [+]process-running ok Jan 27 09:56:23 crc kubenswrapper[4869]: healthz check failed Jan 27 
09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.440557 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffwjx" podUID="17abb21b-00f0-41dd-80a3-5d4cb9acc1e6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.460796 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lfzk\" (UniqueName: \"kubernetes.io/projected/7d235ac0-6891-411b-8d02-2333775dcb9a-kube-api-access-9lfzk\") pod \"redhat-marketplace-mbnv9\" (UID: \"7d235ac0-6891-411b-8d02-2333775dcb9a\") " pod="openshift-marketplace/redhat-marketplace-mbnv9" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.487699 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mbnv9" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.504768 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.504812 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.507203 4869 patch_prober.go:28] interesting pod/console-f9d7485db-q86c4 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.507263 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-q86c4" podUID="b6851779-1393-4518-be8b-519296708bd7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.636608 4869 generic.go:334] "Generic (PLEG): container finished" podID="3a2ec119-d8f3-4edb-aa2f-d4ffd3617458" containerID="81f2de3c56348d49357a97adfae12fb106f9dc64fbef3355806b6feb19137646" exitCode=0 Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.636685 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z" event={"ID":"3a2ec119-d8f3-4edb-aa2f-d4ffd3617458","Type":"ContainerDied","Data":"81f2de3c56348d49357a97adfae12fb106f9dc64fbef3355806b6feb19137646"} Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.639101 4869 generic.go:334] "Generic (PLEG): container finished" podID="deb3e386-81b3-48d9-ba20-8a27ea09d026" containerID="9cf620e7843111d4d81d7a25ae99ec547626736658dac79385c3275ab9ce7309" exitCode=0 Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.639192 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qz25t" event={"ID":"deb3e386-81b3-48d9-ba20-8a27ea09d026","Type":"ContainerDied","Data":"9cf620e7843111d4d81d7a25ae99ec547626736658dac79385c3275ab9ce7309"} Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.639240 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qz25t" event={"ID":"deb3e386-81b3-48d9-ba20-8a27ea09d026","Type":"ContainerStarted","Data":"41e3390683c65e22040730d8a629d7e316ccfa8a540212031d3af8041b2806b0"} Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.747478 4869 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 09:56:23 crc kubenswrapper[4869]: W0127 09:56:23.776951 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod603a2845_c212_4aed_9faa_8e691d4229b9.slice/crio-aae43f7b3f5cb5dedba0862313e4eb1c0a11eb51b7034a60a3ea9a4b55ffecb4 WatchSource:0}: Error finding container aae43f7b3f5cb5dedba0862313e4eb1c0a11eb51b7034a60a3ea9a4b55ffecb4: Status 404 returned error can't find the container with id aae43f7b3f5cb5dedba0862313e4eb1c0a11eb51b7034a60a3ea9a4b55ffecb4 Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.792482 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-dpsrp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.792526 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-dpsrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.792536 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dpsrp" podUID="493c38dc-c859-4715-b97f-be1388ee2162" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.792582 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dpsrp" podUID="493c38dc-c859-4715-b97f-be1388ee2162" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.960731 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.973378 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-w8hng" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.982593 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lb57z"] Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.983655 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lb57z" Jan 27 09:56:23 crc kubenswrapper[4869]: I0127 09:56:23.986445 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.000875 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lb57z"] Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.067157 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mbnv9"] Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.165624 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79eecc44-f04a-43b0-ae75-84843aa45574-utilities\") pod \"redhat-operators-lb57z\" (UID: \"79eecc44-f04a-43b0-ae75-84843aa45574\") " pod="openshift-marketplace/redhat-operators-lb57z" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.167577 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79eecc44-f04a-43b0-ae75-84843aa45574-catalog-content\") pod \"redhat-operators-lb57z\" (UID: \"79eecc44-f04a-43b0-ae75-84843aa45574\") " pod="openshift-marketplace/redhat-operators-lb57z" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.167719 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qntpd\" (UniqueName: \"kubernetes.io/projected/79eecc44-f04a-43b0-ae75-84843aa45574-kube-api-access-qntpd\") pod \"redhat-operators-lb57z\" (UID: \"79eecc44-f04a-43b0-ae75-84843aa45574\") " pod="openshift-marketplace/redhat-operators-lb57z" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.269152 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79eecc44-f04a-43b0-ae75-84843aa45574-utilities\") pod \"redhat-operators-lb57z\" (UID: \"79eecc44-f04a-43b0-ae75-84843aa45574\") " pod="openshift-marketplace/redhat-operators-lb57z" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.269213 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79eecc44-f04a-43b0-ae75-84843aa45574-catalog-content\") pod \"redhat-operators-lb57z\" (UID: \"79eecc44-f04a-43b0-ae75-84843aa45574\") " pod="openshift-marketplace/redhat-operators-lb57z" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.269246 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qntpd\" (UniqueName: \"kubernetes.io/projected/79eecc44-f04a-43b0-ae75-84843aa45574-kube-api-access-qntpd\") pod \"redhat-operators-lb57z\" (UID: \"79eecc44-f04a-43b0-ae75-84843aa45574\") " pod="openshift-marketplace/redhat-operators-lb57z" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.269894 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79eecc44-f04a-43b0-ae75-84843aa45574-utilities\") pod \"redhat-operators-lb57z\" (UID: \"79eecc44-f04a-43b0-ae75-84843aa45574\") " pod="openshift-marketplace/redhat-operators-lb57z" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.270294 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/79eecc44-f04a-43b0-ae75-84843aa45574-catalog-content\") pod \"redhat-operators-lb57z\" (UID: \"79eecc44-f04a-43b0-ae75-84843aa45574\") " pod="openshift-marketplace/redhat-operators-lb57z" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.294647 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qntpd\" (UniqueName: \"kubernetes.io/projected/79eecc44-f04a-43b0-ae75-84843aa45574-kube-api-access-qntpd\") pod \"redhat-operators-lb57z\" (UID: \"79eecc44-f04a-43b0-ae75-84843aa45574\") " pod="openshift-marketplace/redhat-operators-lb57z" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.322090 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lb57z" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.367742 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-chwp4"] Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.369308 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chwp4" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.370260 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e2c5b6e-1f12-4906-b2f8-303354595a04-utilities\") pod \"redhat-operators-chwp4\" (UID: \"3e2c5b6e-1f12-4906-b2f8-303354595a04\") " pod="openshift-marketplace/redhat-operators-chwp4" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.370290 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbvh7\" (UniqueName: \"kubernetes.io/projected/3e2c5b6e-1f12-4906-b2f8-303354595a04-kube-api-access-dbvh7\") pod \"redhat-operators-chwp4\" (UID: \"3e2c5b6e-1f12-4906-b2f8-303354595a04\") " pod="openshift-marketplace/redhat-operators-chwp4" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.370336 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e2c5b6e-1f12-4906-b2f8-303354595a04-catalog-content\") pod \"redhat-operators-chwp4\" (UID: \"3e2c5b6e-1f12-4906-b2f8-303354595a04\") " pod="openshift-marketplace/redhat-operators-chwp4" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.380871 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chwp4"] Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.436422 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-ffwjx" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.439372 4869 patch_prober.go:28] interesting pod/router-default-5444994796-ffwjx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 09:56:24 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 27 09:56:24 crc kubenswrapper[4869]: [+]process-running ok Jan 27 09:56:24 crc kubenswrapper[4869]: healthz check failed Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.439440 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffwjx" podUID="17abb21b-00f0-41dd-80a3-5d4cb9acc1e6" containerName="router" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.471099 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e2c5b6e-1f12-4906-b2f8-303354595a04-utilities\") pod \"redhat-operators-chwp4\" (UID: \"3e2c5b6e-1f12-4906-b2f8-303354595a04\") " pod="openshift-marketplace/redhat-operators-chwp4" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.471150 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbvh7\" (UniqueName: \"kubernetes.io/projected/3e2c5b6e-1f12-4906-b2f8-303354595a04-kube-api-access-dbvh7\") pod \"redhat-operators-chwp4\" (UID: \"3e2c5b6e-1f12-4906-b2f8-303354595a04\") " pod="openshift-marketplace/redhat-operators-chwp4" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.471237 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e2c5b6e-1f12-4906-b2f8-303354595a04-catalog-content\") pod \"redhat-operators-chwp4\" (UID: \"3e2c5b6e-1f12-4906-b2f8-303354595a04\") " pod="openshift-marketplace/redhat-operators-chwp4" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.471649 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e2c5b6e-1f12-4906-b2f8-303354595a04-catalog-content\") pod \"redhat-operators-chwp4\" (UID: \"3e2c5b6e-1f12-4906-b2f8-303354595a04\") " pod="openshift-marketplace/redhat-operators-chwp4" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.471899 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e2c5b6e-1f12-4906-b2f8-303354595a04-utilities\") pod \"redhat-operators-chwp4\" (UID: \"3e2c5b6e-1f12-4906-b2f8-303354595a04\") " pod="openshift-marketplace/redhat-operators-chwp4" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.491767 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbvh7\" (UniqueName: \"kubernetes.io/projected/3e2c5b6e-1f12-4906-b2f8-303354595a04-kube-api-access-dbvh7\") pod \"redhat-operators-chwp4\" (UID: \"3e2c5b6e-1f12-4906-b2f8-303354595a04\") " pod="openshift-marketplace/redhat-operators-chwp4" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.558121 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.660279 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"603a2845-c212-4aed-9faa-8e691d4229b9","Type":"ContainerStarted","Data":"c38537f6bde83f08ee9efc8b405c6ba2a9e837a04be98e4d5d9b188365b91b68"} Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.660562 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"603a2845-c212-4aed-9faa-8e691d4229b9","Type":"ContainerStarted","Data":"aae43f7b3f5cb5dedba0862313e4eb1c0a11eb51b7034a60a3ea9a4b55ffecb4"} Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.688157 4869 generic.go:334] "Generic (PLEG): container finished" podID="7d235ac0-6891-411b-8d02-2333775dcb9a" containerID="f627098da275ce2c39b6d0223acced9245e5eb5b3896bb09f27c547f28413895" exitCode=0 Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.688968 4869 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbnv9" event={"ID":"7d235ac0-6891-411b-8d02-2333775dcb9a","Type":"ContainerDied","Data":"f627098da275ce2c39b6d0223acced9245e5eb5b3896bb09f27c547f28413895"} Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.688994 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbnv9" event={"ID":"7d235ac0-6891-411b-8d02-2333775dcb9a","Type":"ContainerStarted","Data":"2062a08c690dd093b42c5cef6f83e262fc27de48dd051156ccd65f009cf7d95c"} Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.691058 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chwp4" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.705993 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.705974603 podStartE2EDuration="2.705974603s" podCreationTimestamp="2026-01-27 09:56:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:24.705239609 +0000 UTC m=+153.325663692" watchObservedRunningTime="2026-01-27 09:56:24.705974603 +0000 UTC m=+153.326398696" Jan 27 09:56:24 crc kubenswrapper[4869]: I0127 09:56:24.903026 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lb57z"] Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.278797 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-chwp4"] Jan 27 09:56:25 crc kubenswrapper[4869]: W0127 09:56:25.286995 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e2c5b6e_1f12_4906_b2f8_303354595a04.slice/crio-e1591e613e1693fd032304c9de1dbee952c4ef2304bc0186b4571580e0dcf0c7 WatchSource:0}: Error finding container e1591e613e1693fd032304c9de1dbee952c4ef2304bc0186b4571580e0dcf0c7: Status 404 returned error can't find the container with id e1591e613e1693fd032304c9de1dbee952c4ef2304bc0186b4571580e0dcf0c7 Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.321465 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z" Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.446814 4869 patch_prober.go:28] interesting pod/router-default-5444994796-ffwjx container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 09:56:25 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 27 09:56:25 crc kubenswrapper[4869]: [+]process-running ok Jan 27 09:56:25 crc kubenswrapper[4869]: healthz check failed Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.446897 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ffwjx" podUID="17abb21b-00f0-41dd-80a3-5d4cb9acc1e6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.503851 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a2ec119-d8f3-4edb-aa2f-d4ffd3617458-secret-volume\") pod \"3a2ec119-d8f3-4edb-aa2f-d4ffd3617458\" (UID: \"3a2ec119-d8f3-4edb-aa2f-d4ffd3617458\") " Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.503970 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a2ec119-d8f3-4edb-aa2f-d4ffd3617458-config-volume\") pod \"3a2ec119-d8f3-4edb-aa2f-d4ffd3617458\" (UID: \"3a2ec119-d8f3-4edb-aa2f-d4ffd3617458\") " Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.504025 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q77tf\" (UniqueName: \"kubernetes.io/projected/3a2ec119-d8f3-4edb-aa2f-d4ffd3617458-kube-api-access-q77tf\") pod \"3a2ec119-d8f3-4edb-aa2f-d4ffd3617458\" (UID: \"3a2ec119-d8f3-4edb-aa2f-d4ffd3617458\") " Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.504651 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a2ec119-d8f3-4edb-aa2f-d4ffd3617458-config-volume" (OuterVolumeSpecName: "config-volume") pod "3a2ec119-d8f3-4edb-aa2f-d4ffd3617458" (UID: "3a2ec119-d8f3-4edb-aa2f-d4ffd3617458"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.517949 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a2ec119-d8f3-4edb-aa2f-d4ffd3617458-kube-api-access-q77tf" (OuterVolumeSpecName: "kube-api-access-q77tf") pod "3a2ec119-d8f3-4edb-aa2f-d4ffd3617458" (UID: "3a2ec119-d8f3-4edb-aa2f-d4ffd3617458"). InnerVolumeSpecName "kube-api-access-q77tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.524276 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a2ec119-d8f3-4edb-aa2f-d4ffd3617458-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3a2ec119-d8f3-4edb-aa2f-d4ffd3617458" (UID: "3a2ec119-d8f3-4edb-aa2f-d4ffd3617458"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.605538 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a2ec119-d8f3-4edb-aa2f-d4ffd3617458-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.605586 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q77tf\" (UniqueName: \"kubernetes.io/projected/3a2ec119-d8f3-4edb-aa2f-d4ffd3617458-kube-api-access-q77tf\") on node \"crc\" DevicePath \"\"" Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.605601 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a2ec119-d8f3-4edb-aa2f-d4ffd3617458-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.706754 4869 generic.go:334] "Generic (PLEG): container finished" podID="603a2845-c212-4aed-9faa-8e691d4229b9" containerID="c38537f6bde83f08ee9efc8b405c6ba2a9e837a04be98e4d5d9b188365b91b68" exitCode=0 Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.706808 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"603a2845-c212-4aed-9faa-8e691d4229b9","Type":"ContainerDied","Data":"c38537f6bde83f08ee9efc8b405c6ba2a9e837a04be98e4d5d9b188365b91b68"} Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.712429 4869 generic.go:334] "Generic (PLEG): container finished" podID="3e2c5b6e-1f12-4906-b2f8-303354595a04" containerID="bbf3dd7a6acabd0165f0cbad7ef22c80079d3c37281b92908fa2ea2f2f7a8e71" exitCode=0 Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.712517 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chwp4" event={"ID":"3e2c5b6e-1f12-4906-b2f8-303354595a04","Type":"ContainerDied","Data":"bbf3dd7a6acabd0165f0cbad7ef22c80079d3c37281b92908fa2ea2f2f7a8e71"} Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.712551 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chwp4" event={"ID":"3e2c5b6e-1f12-4906-b2f8-303354595a04","Type":"ContainerStarted","Data":"e1591e613e1693fd032304c9de1dbee952c4ef2304bc0186b4571580e0dcf0c7"} Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.717003 4869 generic.go:334] "Generic (PLEG): container finished" podID="79eecc44-f04a-43b0-ae75-84843aa45574" containerID="b362246e5b26f5a3c352101a924ba895780d601834dad5eaa105b9c82f27a1fb" exitCode=0 Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.717098 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb57z" event={"ID":"79eecc44-f04a-43b0-ae75-84843aa45574","Type":"ContainerDied","Data":"b362246e5b26f5a3c352101a924ba895780d601834dad5eaa105b9c82f27a1fb"} Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.717132 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb57z" event={"ID":"79eecc44-f04a-43b0-ae75-84843aa45574","Type":"ContainerStarted","Data":"bbd83b88950964fa96a6f63360541f10b1ee03b3113f1bfcdbfa7ad1339229fa"} Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.722233 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z" 
event={"ID":"3a2ec119-d8f3-4edb-aa2f-d4ffd3617458","Type":"ContainerDied","Data":"0ef1fb41828fc4a55da677c3c136db07c37cf75a51515492a867ec0537164840"} Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.722268 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ef1fb41828fc4a55da677c3c136db07c37cf75a51515492a867ec0537164840" Jan 27 09:56:25 crc kubenswrapper[4869]: I0127 09:56:25.722317 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z" Jan 27 09:56:26 crc kubenswrapper[4869]: I0127 09:56:26.370525 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 09:56:26 crc kubenswrapper[4869]: E0127 09:56:26.370872 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a2ec119-d8f3-4edb-aa2f-d4ffd3617458" containerName="collect-profiles" Jan 27 09:56:26 crc kubenswrapper[4869]: I0127 09:56:26.370887 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a2ec119-d8f3-4edb-aa2f-d4ffd3617458" containerName="collect-profiles" Jan 27 09:56:26 crc kubenswrapper[4869]: I0127 09:56:26.371037 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a2ec119-d8f3-4edb-aa2f-d4ffd3617458" containerName="collect-profiles" Jan 27 09:56:26 crc kubenswrapper[4869]: I0127 09:56:26.371476 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 09:56:26 crc kubenswrapper[4869]: I0127 09:56:26.373285 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 09:56:26 crc kubenswrapper[4869]: I0127 09:56:26.407954 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 09:56:26 crc kubenswrapper[4869]: I0127 09:56:26.410955 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 09:56:26 crc kubenswrapper[4869]: I0127 09:56:26.440336 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-ffwjx" Jan 27 09:56:26 crc kubenswrapper[4869]: I0127 09:56:26.442745 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-ffwjx" Jan 27 09:56:26 crc kubenswrapper[4869]: I0127 09:56:26.523923 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24a183f2-7b22-456b-84cd-2f68c1760127-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"24a183f2-7b22-456b-84cd-2f68c1760127\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 09:56:26 crc kubenswrapper[4869]: I0127 09:56:26.523967 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24a183f2-7b22-456b-84cd-2f68c1760127-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"24a183f2-7b22-456b-84cd-2f68c1760127\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 09:56:26 crc kubenswrapper[4869]: I0127 09:56:26.625955 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24a183f2-7b22-456b-84cd-2f68c1760127-kubelet-dir\") pod 
\"revision-pruner-8-crc\" (UID: \"24a183f2-7b22-456b-84cd-2f68c1760127\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 09:56:26 crc kubenswrapper[4869]: I0127 09:56:26.626341 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24a183f2-7b22-456b-84cd-2f68c1760127-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"24a183f2-7b22-456b-84cd-2f68c1760127\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 09:56:26 crc kubenswrapper[4869]: I0127 09:56:26.626403 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24a183f2-7b22-456b-84cd-2f68c1760127-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"24a183f2-7b22-456b-84cd-2f68c1760127\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 09:56:26 crc kubenswrapper[4869]: I0127 09:56:26.643216 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24a183f2-7b22-456b-84cd-2f68c1760127-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"24a183f2-7b22-456b-84cd-2f68c1760127\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 09:56:26 crc kubenswrapper[4869]: I0127 09:56:26.726969 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 09:56:27 crc kubenswrapper[4869]: I0127 09:56:27.129989 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 09:56:27 crc kubenswrapper[4869]: I0127 09:56:27.198246 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 09:56:27 crc kubenswrapper[4869]: W0127 09:56:27.219933 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod24a183f2_7b22_456b_84cd_2f68c1760127.slice/crio-b4454382960f41305bf2bce357455ad2cafeba5c8b096a2d0136a5a05f79526c WatchSource:0}: Error finding container b4454382960f41305bf2bce357455ad2cafeba5c8b096a2d0136a5a05f79526c: Status 404 returned error can't find the container with id b4454382960f41305bf2bce357455ad2cafeba5c8b096a2d0136a5a05f79526c Jan 27 09:56:27 crc kubenswrapper[4869]: I0127 09:56:27.234252 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/603a2845-c212-4aed-9faa-8e691d4229b9-kube-api-access\") pod \"603a2845-c212-4aed-9faa-8e691d4229b9\" (UID: \"603a2845-c212-4aed-9faa-8e691d4229b9\") " Jan 27 09:56:27 crc kubenswrapper[4869]: I0127 09:56:27.234304 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/603a2845-c212-4aed-9faa-8e691d4229b9-kubelet-dir\") pod \"603a2845-c212-4aed-9faa-8e691d4229b9\" (UID: \"603a2845-c212-4aed-9faa-8e691d4229b9\") " Jan 27 09:56:27 crc kubenswrapper[4869]: I0127 09:56:27.234716 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/603a2845-c212-4aed-9faa-8e691d4229b9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "603a2845-c212-4aed-9faa-8e691d4229b9" (UID: "603a2845-c212-4aed-9faa-8e691d4229b9"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:56:27 crc kubenswrapper[4869]: I0127 09:56:27.256574 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/603a2845-c212-4aed-9faa-8e691d4229b9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "603a2845-c212-4aed-9faa-8e691d4229b9" (UID: "603a2845-c212-4aed-9faa-8e691d4229b9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:56:27 crc kubenswrapper[4869]: I0127 09:56:27.343559 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/603a2845-c212-4aed-9faa-8e691d4229b9-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 09:56:27 crc kubenswrapper[4869]: I0127 09:56:27.343595 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/603a2845-c212-4aed-9faa-8e691d4229b9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 09:56:27 crc kubenswrapper[4869]: I0127 09:56:27.742795 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"24a183f2-7b22-456b-84cd-2f68c1760127","Type":"ContainerStarted","Data":"b4454382960f41305bf2bce357455ad2cafeba5c8b096a2d0136a5a05f79526c"} Jan 27 09:56:27 crc kubenswrapper[4869]: I0127 09:56:27.745720 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"603a2845-c212-4aed-9faa-8e691d4229b9","Type":"ContainerDied","Data":"aae43f7b3f5cb5dedba0862313e4eb1c0a11eb51b7034a60a3ea9a4b55ffecb4"} Jan 27 09:56:27 crc kubenswrapper[4869]: I0127 09:56:27.745743 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aae43f7b3f5cb5dedba0862313e4eb1c0a11eb51b7034a60a3ea9a4b55ffecb4" Jan 27 09:56:27 crc kubenswrapper[4869]: I0127 09:56:27.745799 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 09:56:28 crc kubenswrapper[4869]: I0127 09:56:28.782877 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"24a183f2-7b22-456b-84cd-2f68c1760127","Type":"ContainerStarted","Data":"a5bae3ba6fd9844ca736ba15dde7a71d7f056560269a3dab0dfa40313b7b9ec8"} Jan 27 09:56:28 crc kubenswrapper[4869]: I0127 09:56:28.799579 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.799547593 podStartE2EDuration="2.799547593s" podCreationTimestamp="2026-01-27 09:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:28.79947808 +0000 UTC m=+157.419902163" watchObservedRunningTime="2026-01-27 09:56:28.799547593 +0000 UTC m=+157.419971676" Jan 27 09:56:29 crc kubenswrapper[4869]: I0127 09:56:29.799846 4869 generic.go:334] "Generic (PLEG): container finished" podID="24a183f2-7b22-456b-84cd-2f68c1760127" containerID="a5bae3ba6fd9844ca736ba15dde7a71d7f056560269a3dab0dfa40313b7b9ec8" exitCode=0 Jan 27 09:56:29 crc kubenswrapper[4869]: I0127 09:56:29.799900 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"24a183f2-7b22-456b-84cd-2f68c1760127","Type":"ContainerDied","Data":"a5bae3ba6fd9844ca736ba15dde7a71d7f056560269a3dab0dfa40313b7b9ec8"} Jan 27 09:56:29 crc kubenswrapper[4869]: I0127 09:56:29.899287 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-q4j8x" Jan 27 09:56:31 crc kubenswrapper[4869]: I0127 09:56:31.181161 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 09:56:31 crc kubenswrapper[4869]: I0127 09:56:31.309029 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24a183f2-7b22-456b-84cd-2f68c1760127-kubelet-dir\") pod \"24a183f2-7b22-456b-84cd-2f68c1760127\" (UID: \"24a183f2-7b22-456b-84cd-2f68c1760127\") " Jan 27 09:56:31 crc kubenswrapper[4869]: I0127 09:56:31.309129 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24a183f2-7b22-456b-84cd-2f68c1760127-kube-api-access\") pod \"24a183f2-7b22-456b-84cd-2f68c1760127\" (UID: \"24a183f2-7b22-456b-84cd-2f68c1760127\") " Jan 27 09:56:31 crc kubenswrapper[4869]: I0127 09:56:31.309527 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24a183f2-7b22-456b-84cd-2f68c1760127-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "24a183f2-7b22-456b-84cd-2f68c1760127" (UID: "24a183f2-7b22-456b-84cd-2f68c1760127"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:56:31 crc kubenswrapper[4869]: I0127 09:56:31.314528 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a183f2-7b22-456b-84cd-2f68c1760127-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "24a183f2-7b22-456b-84cd-2f68c1760127" (UID: "24a183f2-7b22-456b-84cd-2f68c1760127"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:56:31 crc kubenswrapper[4869]: I0127 09:56:31.410549 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24a183f2-7b22-456b-84cd-2f68c1760127-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 09:56:31 crc kubenswrapper[4869]: I0127 09:56:31.410594 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24a183f2-7b22-456b-84cd-2f68c1760127-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 09:56:31 crc kubenswrapper[4869]: I0127 09:56:31.811666 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"24a183f2-7b22-456b-84cd-2f68c1760127","Type":"ContainerDied","Data":"b4454382960f41305bf2bce357455ad2cafeba5c8b096a2d0136a5a05f79526c"} Jan 27 09:56:31 crc kubenswrapper[4869]: I0127 09:56:31.812026 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4454382960f41305bf2bce357455ad2cafeba5c8b096a2d0136a5a05f79526c" Jan 27 09:56:31 crc kubenswrapper[4869]: I0127 09:56:31.811980 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 09:56:33 crc kubenswrapper[4869]: I0127 09:56:33.508024 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:33 crc kubenswrapper[4869]: I0127 09:56:33.512026 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-q86c4" Jan 27 09:56:33 crc kubenswrapper[4869]: I0127 09:56:33.793589 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-dpsrp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 27 09:56:33 crc kubenswrapper[4869]: I0127 09:56:33.793683 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dpsrp" podUID="493c38dc-c859-4715-b97f-be1388ee2162" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 09:56:33 crc kubenswrapper[4869]: I0127 09:56:33.793598 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-dpsrp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 27 09:56:33 crc kubenswrapper[4869]: I0127 09:56:33.793775 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dpsrp" podUID="493c38dc-c859-4715-b97f-be1388ee2162" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 27 09:56:33 crc kubenswrapper[4869]: I0127 09:56:33.876964 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0bf72cba-f163-4dc2-b157-cfeb56d0177b-metrics-certs\") pod \"network-metrics-daemon-p5frm\" (UID: \"0bf72cba-f163-4dc2-b157-cfeb56d0177b\") " pod="openshift-multus/network-metrics-daemon-p5frm" Jan 27 09:56:33 crc kubenswrapper[4869]: I0127 09:56:33.882415 4869 
Jan 27 09:56:34 crc kubenswrapper[4869]: I0127 09:56:34.085650 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p5frm"
Jan 27 09:56:41 crc kubenswrapper[4869]: I0127 09:56:41.638791 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp"
Jan 27 09:56:43 crc kubenswrapper[4869]: I0127 09:56:43.797252 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-dpsrp"
Jan 27 09:56:45 crc kubenswrapper[4869]: I0127 09:56:45.697537 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 09:56:45 crc kubenswrapper[4869]: I0127 09:56:45.697606 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 09:56:51 crc kubenswrapper[4869]: E0127 09:56:51.680385 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 27 09:56:51 crc kubenswrapper[4869]: E0127 09:56:51.680885 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xv9p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-c6zf4_openshift-marketplace(fe066c30-021e-4a80-8541-148eec52dde8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 09:56:51 crc kubenswrapper[4869]: E0127 09:56:51.682034 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-c6zf4" podUID="fe066c30-021e-4a80-8541-148eec52dde8"
Jan 27 09:56:52 crc kubenswrapper[4869]: E0127 09:56:52.848728 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-c6zf4" podUID="fe066c30-021e-4a80-8541-148eec52dde8"
Jan 27 09:56:52 crc kubenswrapper[4869]: E0127 09:56:52.997160 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 27 09:56:52 crc kubenswrapper[4869]: E0127 09:56:52.997293 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lw7c9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qz25t_openshift-marketplace(deb3e386-81b3-48d9-ba20-8a27ea09d026): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 09:56:52 crc kubenswrapper[4869]: E0127 09:56:52.998613 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qz25t" podUID="deb3e386-81b3-48d9-ba20-8a27ea09d026"
Jan 27 09:56:53 crc kubenswrapper[4869]: I0127 09:56:53.282936 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-p5frm"]
Jan 27 09:56:53 crc kubenswrapper[4869]: I0127 09:56:53.947025 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jw59"
Jan 27 09:56:53 crc kubenswrapper[4869]: I0127 09:56:53.971883 4869 generic.go:334] "Generic (PLEG): container finished" podID="75088e3e-820e-444a-b9d1-ed7be4c7bbad" containerID="667b800e52b99017a2f1cdc68ffaa993d23dd5668e36b40f4de2bdee33d58e83" exitCode=0
Jan 27 09:56:53 crc kubenswrapper[4869]: I0127 09:56:53.971989 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgzqt" event={"ID":"75088e3e-820e-444a-b9d1-ed7be4c7bbad","Type":"ContainerDied","Data":"667b800e52b99017a2f1cdc68ffaa993d23dd5668e36b40f4de2bdee33d58e83"}
Jan 27 09:56:53 crc kubenswrapper[4869]: I0127 09:56:53.979699 4869 generic.go:334] "Generic (PLEG): container finished" podID="7d235ac0-6891-411b-8d02-2333775dcb9a" containerID="206a1eed8b2c48e0c4988cb227f88fffbff9a0094744083cd2ba72554003c7ed" exitCode=0
Jan 27 09:56:53 crc kubenswrapper[4869]: I0127 09:56:53.979793 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbnv9" event={"ID":"7d235ac0-6891-411b-8d02-2333775dcb9a","Type":"ContainerDied","Data":"206a1eed8b2c48e0c4988cb227f88fffbff9a0094744083cd2ba72554003c7ed"}
Jan 27 09:56:53 crc kubenswrapper[4869]: I0127 09:56:53.984009 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-p5frm" event={"ID":"0bf72cba-f163-4dc2-b157-cfeb56d0177b","Type":"ContainerStarted","Data":"567e956d3775ac08863a5ccf1893eec844f3089011ba6b6a0539144d7d15fb30"}
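The sequence above is the kubelet's image-pull retry path: the CRI pull fails ("context canceled"), the pod sync reports ErrImagePull, and later syncs report ImagePullBackOff while the kubelet waits out an increasing delay before retrying. The back-off idea, as an illustrative sketch only (the kubelet's real delays, caps, and error types differ):

    import time

    def pull_with_backoff(pull, base=1.0, cap=300.0, attempts=6):
        """Retry a failing pull with doubling delays, capped at `cap` seconds."""
        delay = base
        for _ in range(attempts):
            try:
                return pull()
            except RuntimeError as err:  # stand-in for "context canceled" etc.
                print(f"pull failed ({err}); backing off {delay:.0f}s")
                time.sleep(delay)
                delay = min(delay * 2, cap)
        raise TimeoutError("still failing: ImagePullBackOff")
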
Jan 27 09:56:53 crc kubenswrapper[4869]: I0127 09:56:53.985212 4869 generic.go:334] "Generic (PLEG): container finished" podID="51d71dd0-a5ff-4891-8801-03d66bb6994c" containerID="e2542476ac1af2ecd3cef63937e0cc2f24a8b7e73b138ecebaf7dcbee4175540" exitCode=0
Jan 27 09:56:53 crc kubenswrapper[4869]: I0127 09:56:53.985263 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbhz7" event={"ID":"51d71dd0-a5ff-4891-8801-03d66bb6994c","Type":"ContainerDied","Data":"e2542476ac1af2ecd3cef63937e0cc2f24a8b7e73b138ecebaf7dcbee4175540"}
Jan 27 09:56:54 crc kubenswrapper[4869]: I0127 09:56:54.013762 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chwp4" event={"ID":"3e2c5b6e-1f12-4906-b2f8-303354595a04","Type":"ContainerStarted","Data":"fdc3a9de0d776ab617bc8ed90e25adbde77fd54d84a488473ecfc8930e4d558e"}
Jan 27 09:56:54 crc kubenswrapper[4869]: I0127 09:56:54.017735 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb57z" event={"ID":"79eecc44-f04a-43b0-ae75-84843aa45574","Type":"ContainerStarted","Data":"caf88914c59daf47ba44a4e0941a46d4f382a2e3035b31cf575d47287ce5b18f"}
Jan 27 09:56:54 crc kubenswrapper[4869]: I0127 09:56:54.021738 4869 generic.go:334] "Generic (PLEG): container finished" podID="95593b9c-39c7-40b7-aadc-4b8292206b30" containerID="b03ea17ec7836c693659a6962da9f3e53567a7019d7f212407e60a8f3e8c63dd" exitCode=0
Jan 27 09:56:54 crc kubenswrapper[4869]: I0127 09:56:54.021851 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8njf" event={"ID":"95593b9c-39c7-40b7-aadc-4b8292206b30","Type":"ContainerDied","Data":"b03ea17ec7836c693659a6962da9f3e53567a7019d7f212407e60a8f3e8c63dd"}
Jan 27 09:56:54 crc kubenswrapper[4869]: E0127 09:56:54.024446 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qz25t" podUID="deb3e386-81b3-48d9-ba20-8a27ea09d026"
Jan 27 09:56:55 crc kubenswrapper[4869]: I0127 09:56:55.029655 4869 generic.go:334] "Generic (PLEG): container finished" podID="79eecc44-f04a-43b0-ae75-84843aa45574" containerID="caf88914c59daf47ba44a4e0941a46d4f382a2e3035b31cf575d47287ce5b18f" exitCode=0
Jan 27 09:56:55 crc kubenswrapper[4869]: I0127 09:56:55.029880 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb57z" event={"ID":"79eecc44-f04a-43b0-ae75-84843aa45574","Type":"ContainerDied","Data":"caf88914c59daf47ba44a4e0941a46d4f382a2e3035b31cf575d47287ce5b18f"}
Jan 27 09:56:55 crc kubenswrapper[4869]: I0127 09:56:55.032406 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-p5frm" event={"ID":"0bf72cba-f163-4dc2-b157-cfeb56d0177b","Type":"ContainerStarted","Data":"5bce31efc0cadd4471d6d437a426c9e00c1fab434d2c76bc8bfe7804ae603b20"}
Jan 27 09:56:55 crc kubenswrapper[4869]: I0127 09:56:55.032454 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-p5frm" event={"ID":"0bf72cba-f163-4dc2-b157-cfeb56d0177b","Type":"ContainerStarted","Data":"bc48bbc5a84830d1d4f6e09bc6d2a8896276f9c2c8d08b2a722fe7dc5c2c5e03"}
Jan 27 09:56:55 crc kubenswrapper[4869]: I0127 09:56:55.035072 4869 generic.go:334] "Generic (PLEG): container finished" podID="3e2c5b6e-1f12-4906-b2f8-303354595a04" containerID="fdc3a9de0d776ab617bc8ed90e25adbde77fd54d84a488473ecfc8930e4d558e" exitCode=0
Jan 27 09:56:55 crc kubenswrapper[4869]: I0127 09:56:55.035107 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chwp4" event={"ID":"3e2c5b6e-1f12-4906-b2f8-303354595a04","Type":"ContainerDied","Data":"fdc3a9de0d776ab617bc8ed90e25adbde77fd54d84a488473ecfc8930e4d558e"}
Jan 27 09:56:55 crc kubenswrapper[4869]: I0127 09:56:55.064280 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-p5frm" podStartSLOduration=164.06425907 podStartE2EDuration="2m44.06425907s" podCreationTimestamp="2026-01-27 09:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:56:55.057251638 +0000 UTC m=+183.677675741" watchObservedRunningTime="2026-01-27 09:56:55.06425907 +0000 UTC m=+183.684683153"
Jan 27 09:57:00 crc kubenswrapper[4869]: I0127 09:57:00.023042 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 09:57:01 crc kubenswrapper[4869]: I0127 09:57:01.066014 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbnv9" event={"ID":"7d235ac0-6891-411b-8d02-2333775dcb9a","Type":"ContainerStarted","Data":"ddfc180a6fb6c8fc8288958ffeff9758853506f64450f92cf84f25545b37d905"}
Jan 27 09:57:01 crc kubenswrapper[4869]: I0127 09:57:01.092987 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mbnv9" podStartSLOduration=5.398261135 podStartE2EDuration="38.092944968s" podCreationTimestamp="2026-01-27 09:56:23 +0000 UTC" firstStartedPulling="2026-01-27 09:56:24.691469018 +0000 UTC m=+153.311893101" lastFinishedPulling="2026-01-27 09:56:57.386152851 +0000 UTC m=+186.006576934" observedRunningTime="2026-01-27 09:57:01.087907161 +0000 UTC m=+189.708331234" watchObservedRunningTime="2026-01-27 09:57:01.092944968 +0000 UTC m=+189.713369081"
Jan 27 09:57:02 crc kubenswrapper[4869]: I0127 09:57:02.072877 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgzqt" event={"ID":"75088e3e-820e-444a-b9d1-ed7be4c7bbad","Type":"ContainerStarted","Data":"faf10e488cc5654ed22011cc18359c8077725f40fcbdc7cc37efffaed295efd0"}
Jan 27 09:57:02 crc kubenswrapper[4869]: I0127 09:57:02.075328 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbhz7" event={"ID":"51d71dd0-a5ff-4891-8801-03d66bb6994c","Type":"ContainerStarted","Data":"e54aadf0f3fd58905482da1a22864fae45824d5f0703f8eb256286db78130a69"}
Jan 27 09:57:02 crc kubenswrapper[4869]: I0127 09:57:02.099568 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kgzqt" podStartSLOduration=5.419120634 podStartE2EDuration="42.099549823s" podCreationTimestamp="2026-01-27 09:56:20 +0000 UTC" firstStartedPulling="2026-01-27 09:56:22.618850293 +0000 UTC m=+151.239274376" lastFinishedPulling="2026-01-27 09:56:59.299279472 +0000 UTC m=+187.919703565" observedRunningTime="2026-01-27 09:57:02.097511457 +0000 UTC m=+190.717935540" watchObservedRunningTime="2026-01-27 09:57:02.099549823 +0000 UTC m=+190.719973906"
m=+190.668223811" observedRunningTime="2026-01-27 09:57:03.150777165 +0000 UTC m=+191.771201248" watchObservedRunningTime="2026-01-27 09:57:03.153889482 +0000 UTC m=+191.774313565" Jan 27 09:57:03 crc kubenswrapper[4869]: I0127 09:57:03.488040 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mbnv9" Jan 27 09:57:03 crc kubenswrapper[4869]: I0127 09:57:03.488412 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mbnv9" Jan 27 09:57:03 crc kubenswrapper[4869]: I0127 09:57:03.769232 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 09:57:03 crc kubenswrapper[4869]: E0127 09:57:03.769479 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a183f2-7b22-456b-84cd-2f68c1760127" containerName="pruner" Jan 27 09:57:03 crc kubenswrapper[4869]: I0127 09:57:03.769494 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a183f2-7b22-456b-84cd-2f68c1760127" containerName="pruner" Jan 27 09:57:03 crc kubenswrapper[4869]: E0127 09:57:03.769520 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603a2845-c212-4aed-9faa-8e691d4229b9" containerName="pruner" Jan 27 09:57:03 crc kubenswrapper[4869]: I0127 09:57:03.769528 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="603a2845-c212-4aed-9faa-8e691d4229b9" containerName="pruner" Jan 27 09:57:03 crc kubenswrapper[4869]: I0127 09:57:03.769667 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="603a2845-c212-4aed-9faa-8e691d4229b9" containerName="pruner" Jan 27 09:57:03 crc kubenswrapper[4869]: I0127 09:57:03.769680 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a183f2-7b22-456b-84cd-2f68c1760127" containerName="pruner" Jan 27 09:57:03 crc kubenswrapper[4869]: I0127 09:57:03.770139 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 09:57:03 crc kubenswrapper[4869]: I0127 09:57:03.772447 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 09:57:03 crc kubenswrapper[4869]: I0127 09:57:03.773143 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 09:57:03 crc kubenswrapper[4869]: I0127 09:57:03.784172 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 09:57:03 crc kubenswrapper[4869]: I0127 09:57:03.802894 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d39f3098-1019-4a8f-8425-8af8ab0e318c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d39f3098-1019-4a8f-8425-8af8ab0e318c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 09:57:03 crc kubenswrapper[4869]: I0127 09:57:03.802949 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d39f3098-1019-4a8f-8425-8af8ab0e318c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d39f3098-1019-4a8f-8425-8af8ab0e318c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 09:57:03 crc kubenswrapper[4869]: I0127 09:57:03.903629 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d39f3098-1019-4a8f-8425-8af8ab0e318c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d39f3098-1019-4a8f-8425-8af8ab0e318c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 09:57:03 crc kubenswrapper[4869]: I0127 09:57:03.903734 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d39f3098-1019-4a8f-8425-8af8ab0e318c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d39f3098-1019-4a8f-8425-8af8ab0e318c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 09:57:03 crc kubenswrapper[4869]: I0127 09:57:03.903751 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d39f3098-1019-4a8f-8425-8af8ab0e318c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d39f3098-1019-4a8f-8425-8af8ab0e318c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 09:57:03 crc kubenswrapper[4869]: I0127 09:57:03.923596 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d39f3098-1019-4a8f-8425-8af8ab0e318c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d39f3098-1019-4a8f-8425-8af8ab0e318c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 09:57:04 crc kubenswrapper[4869]: I0127 09:57:04.084621 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 09:57:04 crc kubenswrapper[4869]: I0127 09:57:04.323322 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lb57z" Jan 27 09:57:04 crc kubenswrapper[4869]: I0127 09:57:04.323629 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lb57z" Jan 27 09:57:04 crc kubenswrapper[4869]: I0127 09:57:04.554700 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 09:57:04 crc kubenswrapper[4869]: W0127 09:57:04.568670 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd39f3098_1019_4a8f_8425_8af8ab0e318c.slice/crio-11c2883a98ada130ca43067c7f4afa420b1850fd0249db4f7040fee61c11b9f3 WatchSource:0}: Error finding container 11c2883a98ada130ca43067c7f4afa420b1850fd0249db4f7040fee61c11b9f3: Status 404 returned error can't find the container with id 11c2883a98ada130ca43067c7f4afa420b1850fd0249db4f7040fee61c11b9f3 Jan 27 09:57:04 crc kubenswrapper[4869]: I0127 09:57:04.692074 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-chwp4" Jan 27 09:57:04 crc kubenswrapper[4869]: I0127 09:57:04.692408 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-chwp4" Jan 27 09:57:04 crc kubenswrapper[4869]: I0127 09:57:04.883560 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-mbnv9" podUID="7d235ac0-6891-411b-8d02-2333775dcb9a" containerName="registry-server" probeResult="failure" output=< Jan 27 09:57:04 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Jan 27 09:57:04 crc kubenswrapper[4869]: > Jan 27 09:57:05 crc kubenswrapper[4869]: I0127 09:57:05.097719 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d39f3098-1019-4a8f-8425-8af8ab0e318c","Type":"ContainerStarted","Data":"8fbfcf6abe38609355187cec8dc591baf53c029846d132537de3731361c0ad9a"} Jan 27 09:57:05 crc kubenswrapper[4869]: I0127 09:57:05.097751 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d39f3098-1019-4a8f-8425-8af8ab0e318c","Type":"ContainerStarted","Data":"11c2883a98ada130ca43067c7f4afa420b1850fd0249db4f7040fee61c11b9f3"} Jan 27 09:57:05 crc kubenswrapper[4869]: I0127 09:57:05.111652 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.111634382 podStartE2EDuration="2.111634382s" podCreationTimestamp="2026-01-27 09:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:57:05.108611517 +0000 UTC m=+193.729035600" watchObservedRunningTime="2026-01-27 09:57:05.111634382 +0000 UTC m=+193.732058465" Jan 27 09:57:05 crc kubenswrapper[4869]: I0127 09:57:05.367757 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lb57z" podUID="79eecc44-f04a-43b0-ae75-84843aa45574" containerName="registry-server" probeResult="failure" output=< Jan 27 09:57:05 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Jan 27 09:57:05 crc kubenswrapper[4869]: 
> Jan 27 09:57:05 crc kubenswrapper[4869]: I0127 09:57:05.737489 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-chwp4" podUID="3e2c5b6e-1f12-4906-b2f8-303354595a04" containerName="registry-server" probeResult="failure" output=< Jan 27 09:57:05 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Jan 27 09:57:05 crc kubenswrapper[4869]: > Jan 27 09:57:06 crc kubenswrapper[4869]: I0127 09:57:06.102396 4869 generic.go:334] "Generic (PLEG): container finished" podID="d39f3098-1019-4a8f-8425-8af8ab0e318c" containerID="8fbfcf6abe38609355187cec8dc591baf53c029846d132537de3731361c0ad9a" exitCode=0 Jan 27 09:57:06 crc kubenswrapper[4869]: I0127 09:57:06.102440 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d39f3098-1019-4a8f-8425-8af8ab0e318c","Type":"ContainerDied","Data":"8fbfcf6abe38609355187cec8dc591baf53c029846d132537de3731361c0ad9a"} Jan 27 09:57:07 crc kubenswrapper[4869]: I0127 09:57:07.121797 4869 generic.go:334] "Generic (PLEG): container finished" podID="fe066c30-021e-4a80-8541-148eec52dde8" containerID="067d46ca1e673881bb32ca4592264987e3197898dc4f6e217c5dd1b6094a9192" exitCode=0 Jan 27 09:57:07 crc kubenswrapper[4869]: I0127 09:57:07.121876 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6zf4" event={"ID":"fe066c30-021e-4a80-8541-148eec52dde8","Type":"ContainerDied","Data":"067d46ca1e673881bb32ca4592264987e3197898dc4f6e217c5dd1b6094a9192"} Jan 27 09:57:07 crc kubenswrapper[4869]: I0127 09:57:07.376475 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 09:57:07 crc kubenswrapper[4869]: I0127 09:57:07.378905 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d39f3098-1019-4a8f-8425-8af8ab0e318c-kubelet-dir\") pod \"d39f3098-1019-4a8f-8425-8af8ab0e318c\" (UID: \"d39f3098-1019-4a8f-8425-8af8ab0e318c\") " Jan 27 09:57:07 crc kubenswrapper[4869]: I0127 09:57:07.378969 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d39f3098-1019-4a8f-8425-8af8ab0e318c-kube-api-access\") pod \"d39f3098-1019-4a8f-8425-8af8ab0e318c\" (UID: \"d39f3098-1019-4a8f-8425-8af8ab0e318c\") " Jan 27 09:57:07 crc kubenswrapper[4869]: I0127 09:57:07.379032 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d39f3098-1019-4a8f-8425-8af8ab0e318c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d39f3098-1019-4a8f-8425-8af8ab0e318c" (UID: "d39f3098-1019-4a8f-8425-8af8ab0e318c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:57:07 crc kubenswrapper[4869]: I0127 09:57:07.379274 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d39f3098-1019-4a8f-8425-8af8ab0e318c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:07 crc kubenswrapper[4869]: I0127 09:57:07.386984 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d39f3098-1019-4a8f-8425-8af8ab0e318c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d39f3098-1019-4a8f-8425-8af8ab0e318c" (UID: "d39f3098-1019-4a8f-8425-8af8ab0e318c"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:57:07 crc kubenswrapper[4869]: I0127 09:57:07.481035 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d39f3098-1019-4a8f-8425-8af8ab0e318c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:08 crc kubenswrapper[4869]: I0127 09:57:08.130960 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d39f3098-1019-4a8f-8425-8af8ab0e318c","Type":"ContainerDied","Data":"11c2883a98ada130ca43067c7f4afa420b1850fd0249db4f7040fee61c11b9f3"} Jan 27 09:57:08 crc kubenswrapper[4869]: I0127 09:57:08.131047 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11c2883a98ada130ca43067c7f4afa420b1850fd0249db4f7040fee61c11b9f3" Jan 27 09:57:08 crc kubenswrapper[4869]: I0127 09:57:08.131125 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 09:57:10 crc kubenswrapper[4869]: I0127 09:57:10.142421 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6zf4" event={"ID":"fe066c30-021e-4a80-8541-148eec52dde8","Type":"ContainerStarted","Data":"c88f19cfcf2d5a68ed193a176968ebec7a04455eec5d30890ec334ec49c785d9"} Jan 27 09:57:10 crc kubenswrapper[4869]: I0127 09:57:10.164267 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c6zf4" podStartSLOduration=2.6308329759999998 podStartE2EDuration="49.164249957s" podCreationTimestamp="2026-01-27 09:56:21 +0000 UTC" firstStartedPulling="2026-01-27 09:56:22.626930974 +0000 UTC m=+151.247355057" lastFinishedPulling="2026-01-27 09:57:09.160347955 +0000 UTC m=+197.780772038" observedRunningTime="2026-01-27 09:57:10.1625107 +0000 UTC m=+198.782934783" watchObservedRunningTime="2026-01-27 09:57:10.164249957 +0000 UTC m=+198.784674040" Jan 27 09:57:10 crc kubenswrapper[4869]: I0127 09:57:10.767202 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 09:57:10 crc kubenswrapper[4869]: E0127 09:57:10.767752 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d39f3098-1019-4a8f-8425-8af8ab0e318c" containerName="pruner" Jan 27 09:57:10 crc kubenswrapper[4869]: I0127 09:57:10.767769 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d39f3098-1019-4a8f-8425-8af8ab0e318c" containerName="pruner" Jan 27 09:57:10 crc kubenswrapper[4869]: I0127 09:57:10.767909 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d39f3098-1019-4a8f-8425-8af8ab0e318c" containerName="pruner" Jan 27 09:57:10 crc kubenswrapper[4869]: I0127 09:57:10.768371 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 09:57:10 crc kubenswrapper[4869]: I0127 09:57:10.770015 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 09:57:10 crc kubenswrapper[4869]: I0127 09:57:10.770144 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 09:57:10 crc kubenswrapper[4869]: I0127 09:57:10.778454 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 09:57:10 crc kubenswrapper[4869]: I0127 09:57:10.923947 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 09:57:10 crc kubenswrapper[4869]: I0127 09:57:10.924593 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a-var-lock\") pod \"installer-9-crc\" (UID: \"8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 09:57:10 crc kubenswrapper[4869]: I0127 09:57:10.924719 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a-kube-api-access\") pod \"installer-9-crc\" (UID: \"8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.026219 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.026812 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a-var-lock\") pod \"installer-9-crc\" (UID: \"8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.026953 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a-kube-api-access\") pod \"installer-9-crc\" (UID: \"8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.027051 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a-var-lock\") pod \"installer-9-crc\" (UID: \"8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.026371 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a-kubelet-dir\") pod \"installer-9-crc\" (UID: 
\"8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.046814 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a-kube-api-access\") pod \"installer-9-crc\" (UID: \"8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.085504 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.104799 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b8njf" Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.104886 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b8njf" Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.153615 4869 generic.go:334] "Generic (PLEG): container finished" podID="deb3e386-81b3-48d9-ba20-8a27ea09d026" containerID="e8b730a1b112b6b4e05da871aefaf0510e0ecdeceecf4a900286bd73a1cf53fd" exitCode=0 Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.153667 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qz25t" event={"ID":"deb3e386-81b3-48d9-ba20-8a27ea09d026","Type":"ContainerDied","Data":"e8b730a1b112b6b4e05da871aefaf0510e0ecdeceecf4a900286bd73a1cf53fd"} Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.196806 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b8njf" Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.244280 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b8njf" Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.287246 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kgzqt" Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.287302 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kgzqt" Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.330665 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kgzqt" Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.471250 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bbhz7" Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.471579 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bbhz7" Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.516852 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bbhz7" Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.529262 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 09:57:11 crc kubenswrapper[4869]: W0127 09:57:11.536076 4869 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pod8db33249_2c9b_4dbd_8e0c_3d7949bf2a3a.slice/crio-5ec0f91d7b62d253969087b0b683068e905035711cde68cbf282ca4ec6e077ea WatchSource:0}: Error finding container 5ec0f91d7b62d253969087b0b683068e905035711cde68cbf282ca4ec6e077ea: Status 404 returned error can't find the container with id 5ec0f91d7b62d253969087b0b683068e905035711cde68cbf282ca4ec6e077ea Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.710956 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c6zf4" Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.711386 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c6zf4" Jan 27 09:57:11 crc kubenswrapper[4869]: I0127 09:57:11.762410 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c6zf4" Jan 27 09:57:12 crc kubenswrapper[4869]: I0127 09:57:12.160418 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a","Type":"ContainerStarted","Data":"0078a13767122b107cb3846ff83d3bcd52adee8aba2f0c31ac69b80976993ec4"} Jan 27 09:57:12 crc kubenswrapper[4869]: I0127 09:57:12.160501 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a","Type":"ContainerStarted","Data":"5ec0f91d7b62d253969087b0b683068e905035711cde68cbf282ca4ec6e077ea"} Jan 27 09:57:12 crc kubenswrapper[4869]: I0127 09:57:12.163065 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qz25t" event={"ID":"deb3e386-81b3-48d9-ba20-8a27ea09d026","Type":"ContainerStarted","Data":"7a4ad4aea4cc82910318263e0ca1267c7abe58bdb1c27914c0a62fedfe86d35f"} Jan 27 09:57:12 crc kubenswrapper[4869]: I0127 09:57:12.177678 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.177659448 podStartE2EDuration="2.177659448s" podCreationTimestamp="2026-01-27 09:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:57:12.175965223 +0000 UTC m=+200.796389306" watchObservedRunningTime="2026-01-27 09:57:12.177659448 +0000 UTC m=+200.798083531" Jan 27 09:57:12 crc kubenswrapper[4869]: I0127 09:57:12.194974 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qz25t" podStartSLOduration=1.839783538 podStartE2EDuration="50.194950718s" podCreationTimestamp="2026-01-27 09:56:22 +0000 UTC" firstStartedPulling="2026-01-27 09:56:23.640610343 +0000 UTC m=+152.261034426" lastFinishedPulling="2026-01-27 09:57:11.995777523 +0000 UTC m=+200.616201606" observedRunningTime="2026-01-27 09:57:12.192657243 +0000 UTC m=+200.813081336" watchObservedRunningTime="2026-01-27 09:57:12.194950718 +0000 UTC m=+200.815374801" Jan 27 09:57:12 crc kubenswrapper[4869]: I0127 09:57:12.210628 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bbhz7" Jan 27 09:57:12 crc kubenswrapper[4869]: I0127 09:57:12.211148 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kgzqt" Jan 27 09:57:12 crc kubenswrapper[4869]: I0127 09:57:12.352032 4869 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rnv4g"] Jan 27 09:57:13 crc kubenswrapper[4869]: I0127 09:57:13.090094 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qz25t" Jan 27 09:57:13 crc kubenswrapper[4869]: I0127 09:57:13.090145 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qz25t" Jan 27 09:57:13 crc kubenswrapper[4869]: I0127 09:57:13.389613 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bbhz7"] Jan 27 09:57:13 crc kubenswrapper[4869]: I0127 09:57:13.527869 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mbnv9" Jan 27 09:57:13 crc kubenswrapper[4869]: I0127 09:57:13.567292 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mbnv9" Jan 27 09:57:14 crc kubenswrapper[4869]: I0127 09:57:14.132711 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-qz25t" podUID="deb3e386-81b3-48d9-ba20-8a27ea09d026" containerName="registry-server" probeResult="failure" output=< Jan 27 09:57:14 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Jan 27 09:57:14 crc kubenswrapper[4869]: > Jan 27 09:57:14 crc kubenswrapper[4869]: I0127 09:57:14.368377 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lb57z" Jan 27 09:57:14 crc kubenswrapper[4869]: I0127 09:57:14.414026 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lb57z" Jan 27 09:57:14 crc kubenswrapper[4869]: I0127 09:57:14.728825 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-chwp4" Jan 27 09:57:14 crc kubenswrapper[4869]: I0127 09:57:14.772310 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-chwp4" Jan 27 09:57:15 crc kubenswrapper[4869]: I0127 09:57:15.177501 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bbhz7" podUID="51d71dd0-a5ff-4891-8801-03d66bb6994c" containerName="registry-server" containerID="cri-o://e54aadf0f3fd58905482da1a22864fae45824d5f0703f8eb256286db78130a69" gracePeriod=2 Jan 27 09:57:15 crc kubenswrapper[4869]: I0127 09:57:15.530222 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bbhz7" Jan 27 09:57:15 crc kubenswrapper[4869]: I0127 09:57:15.589209 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mbnv9"] Jan 27 09:57:15 crc kubenswrapper[4869]: I0127 09:57:15.589535 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mbnv9" podUID="7d235ac0-6891-411b-8d02-2333775dcb9a" containerName="registry-server" containerID="cri-o://ddfc180a6fb6c8fc8288958ffeff9758853506f64450f92cf84f25545b37d905" gracePeriod=2 Jan 27 09:57:15 crc kubenswrapper[4869]: I0127 09:57:15.691752 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp5dn\" (UniqueName: \"kubernetes.io/projected/51d71dd0-a5ff-4891-8801-03d66bb6994c-kube-api-access-lp5dn\") pod \"51d71dd0-a5ff-4891-8801-03d66bb6994c\" (UID: \"51d71dd0-a5ff-4891-8801-03d66bb6994c\") " Jan 27 09:57:15 crc kubenswrapper[4869]: I0127 09:57:15.691931 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51d71dd0-a5ff-4891-8801-03d66bb6994c-catalog-content\") pod \"51d71dd0-a5ff-4891-8801-03d66bb6994c\" (UID: \"51d71dd0-a5ff-4891-8801-03d66bb6994c\") " Jan 27 09:57:15 crc kubenswrapper[4869]: I0127 09:57:15.691962 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51d71dd0-a5ff-4891-8801-03d66bb6994c-utilities\") pod \"51d71dd0-a5ff-4891-8801-03d66bb6994c\" (UID: \"51d71dd0-a5ff-4891-8801-03d66bb6994c\") " Jan 27 09:57:15 crc kubenswrapper[4869]: I0127 09:57:15.692756 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51d71dd0-a5ff-4891-8801-03d66bb6994c-utilities" (OuterVolumeSpecName: "utilities") pod "51d71dd0-a5ff-4891-8801-03d66bb6994c" (UID: "51d71dd0-a5ff-4891-8801-03d66bb6994c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:57:15 crc kubenswrapper[4869]: I0127 09:57:15.702842 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:57:15 crc kubenswrapper[4869]: I0127 09:57:15.703146 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:57:15 crc kubenswrapper[4869]: I0127 09:57:15.703197 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 09:57:15 crc kubenswrapper[4869]: I0127 09:57:15.703904 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5"} pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 09:57:15 crc kubenswrapper[4869]: I0127 09:57:15.704017 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" containerID="cri-o://c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5" gracePeriod=600 Jan 27 09:57:15 crc kubenswrapper[4869]: I0127 09:57:15.706377 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d71dd0-a5ff-4891-8801-03d66bb6994c-kube-api-access-lp5dn" (OuterVolumeSpecName: "kube-api-access-lp5dn") pod "51d71dd0-a5ff-4891-8801-03d66bb6994c" (UID: "51d71dd0-a5ff-4891-8801-03d66bb6994c"). InnerVolumeSpecName "kube-api-access-lp5dn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:57:15 crc kubenswrapper[4869]: I0127 09:57:15.750041 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51d71dd0-a5ff-4891-8801-03d66bb6994c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51d71dd0-a5ff-4891-8801-03d66bb6994c" (UID: "51d71dd0-a5ff-4891-8801-03d66bb6994c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:57:15 crc kubenswrapper[4869]: I0127 09:57:15.793823 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp5dn\" (UniqueName: \"kubernetes.io/projected/51d71dd0-a5ff-4891-8801-03d66bb6994c-kube-api-access-lp5dn\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:15 crc kubenswrapper[4869]: I0127 09:57:15.793880 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51d71dd0-a5ff-4891-8801-03d66bb6994c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:15 crc kubenswrapper[4869]: I0127 09:57:15.793892 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51d71dd0-a5ff-4891-8801-03d66bb6994c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.190162 4869 generic.go:334] "Generic (PLEG): container finished" podID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerID="c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5" exitCode=0 Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.190235 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerDied","Data":"c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5"} Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.192048 4869 generic.go:334] "Generic (PLEG): container finished" podID="7d235ac0-6891-411b-8d02-2333775dcb9a" containerID="ddfc180a6fb6c8fc8288958ffeff9758853506f64450f92cf84f25545b37d905" exitCode=0 Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.192098 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbnv9" event={"ID":"7d235ac0-6891-411b-8d02-2333775dcb9a","Type":"ContainerDied","Data":"ddfc180a6fb6c8fc8288958ffeff9758853506f64450f92cf84f25545b37d905"} Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.195763 4869 generic.go:334] "Generic (PLEG): container finished" podID="51d71dd0-a5ff-4891-8801-03d66bb6994c" containerID="e54aadf0f3fd58905482da1a22864fae45824d5f0703f8eb256286db78130a69" exitCode=0 Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.195823 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bbhz7" Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.195816 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbhz7" event={"ID":"51d71dd0-a5ff-4891-8801-03d66bb6994c","Type":"ContainerDied","Data":"e54aadf0f3fd58905482da1a22864fae45824d5f0703f8eb256286db78130a69"} Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.196021 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bbhz7" event={"ID":"51d71dd0-a5ff-4891-8801-03d66bb6994c","Type":"ContainerDied","Data":"f4123809ca2cbed556d104426c353cd2209b04bd6933921a7429c41e2b48002c"} Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.196047 4869 scope.go:117] "RemoveContainer" containerID="e54aadf0f3fd58905482da1a22864fae45824d5f0703f8eb256286db78130a69" Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.208540 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bbhz7"] Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.213565 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bbhz7"] Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.218052 4869 scope.go:117] "RemoveContainer" containerID="e2542476ac1af2ecd3cef63937e0cc2f24a8b7e73b138ecebaf7dcbee4175540" Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.234032 4869 scope.go:117] "RemoveContainer" containerID="e72a48c0c8382e428e43f1a66979abf896a2196eb31f724f2d82c40e96f17977" Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.252250 4869 scope.go:117] "RemoveContainer" containerID="e54aadf0f3fd58905482da1a22864fae45824d5f0703f8eb256286db78130a69" Jan 27 09:57:16 crc kubenswrapper[4869]: E0127 09:57:16.252949 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e54aadf0f3fd58905482da1a22864fae45824d5f0703f8eb256286db78130a69\": container with ID starting with e54aadf0f3fd58905482da1a22864fae45824d5f0703f8eb256286db78130a69 not found: ID does not exist" containerID="e54aadf0f3fd58905482da1a22864fae45824d5f0703f8eb256286db78130a69" Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.252996 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e54aadf0f3fd58905482da1a22864fae45824d5f0703f8eb256286db78130a69"} err="failed to get container status \"e54aadf0f3fd58905482da1a22864fae45824d5f0703f8eb256286db78130a69\": rpc error: code = NotFound desc = could not find container \"e54aadf0f3fd58905482da1a22864fae45824d5f0703f8eb256286db78130a69\": container with ID starting with e54aadf0f3fd58905482da1a22864fae45824d5f0703f8eb256286db78130a69 not found: ID does not exist" Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.253028 4869 scope.go:117] "RemoveContainer" containerID="e2542476ac1af2ecd3cef63937e0cc2f24a8b7e73b138ecebaf7dcbee4175540" Jan 27 09:57:16 crc kubenswrapper[4869]: E0127 09:57:16.253602 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2542476ac1af2ecd3cef63937e0cc2f24a8b7e73b138ecebaf7dcbee4175540\": container with ID starting with e2542476ac1af2ecd3cef63937e0cc2f24a8b7e73b138ecebaf7dcbee4175540 not found: ID does not exist" containerID="e2542476ac1af2ecd3cef63937e0cc2f24a8b7e73b138ecebaf7dcbee4175540" Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.253627 4869 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2542476ac1af2ecd3cef63937e0cc2f24a8b7e73b138ecebaf7dcbee4175540"} err="failed to get container status \"e2542476ac1af2ecd3cef63937e0cc2f24a8b7e73b138ecebaf7dcbee4175540\": rpc error: code = NotFound desc = could not find container \"e2542476ac1af2ecd3cef63937e0cc2f24a8b7e73b138ecebaf7dcbee4175540\": container with ID starting with e2542476ac1af2ecd3cef63937e0cc2f24a8b7e73b138ecebaf7dcbee4175540 not found: ID does not exist" Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.253647 4869 scope.go:117] "RemoveContainer" containerID="e72a48c0c8382e428e43f1a66979abf896a2196eb31f724f2d82c40e96f17977" Jan 27 09:57:16 crc kubenswrapper[4869]: E0127 09:57:16.253994 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e72a48c0c8382e428e43f1a66979abf896a2196eb31f724f2d82c40e96f17977\": container with ID starting with e72a48c0c8382e428e43f1a66979abf896a2196eb31f724f2d82c40e96f17977 not found: ID does not exist" containerID="e72a48c0c8382e428e43f1a66979abf896a2196eb31f724f2d82c40e96f17977" Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.254012 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e72a48c0c8382e428e43f1a66979abf896a2196eb31f724f2d82c40e96f17977"} err="failed to get container status \"e72a48c0c8382e428e43f1a66979abf896a2196eb31f724f2d82c40e96f17977\": rpc error: code = NotFound desc = could not find container \"e72a48c0c8382e428e43f1a66979abf896a2196eb31f724f2d82c40e96f17977\": container with ID starting with e72a48c0c8382e428e43f1a66979abf896a2196eb31f724f2d82c40e96f17977 not found: ID does not exist" Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.359233 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mbnv9" Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.500019 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lfzk\" (UniqueName: \"kubernetes.io/projected/7d235ac0-6891-411b-8d02-2333775dcb9a-kube-api-access-9lfzk\") pod \"7d235ac0-6891-411b-8d02-2333775dcb9a\" (UID: \"7d235ac0-6891-411b-8d02-2333775dcb9a\") " Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.500397 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d235ac0-6891-411b-8d02-2333775dcb9a-utilities\") pod \"7d235ac0-6891-411b-8d02-2333775dcb9a\" (UID: \"7d235ac0-6891-411b-8d02-2333775dcb9a\") " Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.500439 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d235ac0-6891-411b-8d02-2333775dcb9a-catalog-content\") pod \"7d235ac0-6891-411b-8d02-2333775dcb9a\" (UID: \"7d235ac0-6891-411b-8d02-2333775dcb9a\") " Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.501180 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d235ac0-6891-411b-8d02-2333775dcb9a-utilities" (OuterVolumeSpecName: "utilities") pod "7d235ac0-6891-411b-8d02-2333775dcb9a" (UID: "7d235ac0-6891-411b-8d02-2333775dcb9a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.507056 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d235ac0-6891-411b-8d02-2333775dcb9a-kube-api-access-9lfzk" (OuterVolumeSpecName: "kube-api-access-9lfzk") pod "7d235ac0-6891-411b-8d02-2333775dcb9a" (UID: "7d235ac0-6891-411b-8d02-2333775dcb9a"). InnerVolumeSpecName "kube-api-access-9lfzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.521163 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d235ac0-6891-411b-8d02-2333775dcb9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d235ac0-6891-411b-8d02-2333775dcb9a" (UID: "7d235ac0-6891-411b-8d02-2333775dcb9a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.602002 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lfzk\" (UniqueName: \"kubernetes.io/projected/7d235ac0-6891-411b-8d02-2333775dcb9a-kube-api-access-9lfzk\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.602042 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d235ac0-6891-411b-8d02-2333775dcb9a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:16 crc kubenswrapper[4869]: I0127 09:57:16.602055 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d235ac0-6891-411b-8d02-2333775dcb9a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:17 crc kubenswrapper[4869]: I0127 09:57:17.203582 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mbnv9" event={"ID":"7d235ac0-6891-411b-8d02-2333775dcb9a","Type":"ContainerDied","Data":"2062a08c690dd093b42c5cef6f83e262fc27de48dd051156ccd65f009cf7d95c"} Jan 27 09:57:17 crc kubenswrapper[4869]: I0127 09:57:17.203635 4869 scope.go:117] "RemoveContainer" containerID="ddfc180a6fb6c8fc8288958ffeff9758853506f64450f92cf84f25545b37d905" Jan 27 09:57:17 crc kubenswrapper[4869]: I0127 09:57:17.203598 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mbnv9" Jan 27 09:57:17 crc kubenswrapper[4869]: I0127 09:57:17.207471 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerStarted","Data":"b2c22e450bc36c30d04521d1630cd32b9d97fe2a4e5e905590b0f57351fdac38"} Jan 27 09:57:17 crc kubenswrapper[4869]: I0127 09:57:17.219259 4869 scope.go:117] "RemoveContainer" containerID="206a1eed8b2c48e0c4988cb227f88fffbff9a0094744083cd2ba72554003c7ed" Jan 27 09:57:17 crc kubenswrapper[4869]: I0127 09:57:17.240162 4869 scope.go:117] "RemoveContainer" containerID="f627098da275ce2c39b6d0223acced9245e5eb5b3896bb09f27c547f28413895" Jan 27 09:57:17 crc kubenswrapper[4869]: I0127 09:57:17.247241 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mbnv9"] Jan 27 09:57:17 crc kubenswrapper[4869]: I0127 09:57:17.247286 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mbnv9"] Jan 27 09:57:17 crc kubenswrapper[4869]: I0127 09:57:17.993660 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chwp4"] Jan 27 09:57:17 crc kubenswrapper[4869]: I0127 09:57:17.993909 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-chwp4" podUID="3e2c5b6e-1f12-4906-b2f8-303354595a04" containerName="registry-server" containerID="cri-o://90bb645ec3bc5542d292219ec70a6a4bf9cc843b988aefb3048c63ab2033da69" gracePeriod=2 Jan 27 09:57:18 crc kubenswrapper[4869]: I0127 09:57:18.039336 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51d71dd0-a5ff-4891-8801-03d66bb6994c" path="/var/lib/kubelet/pods/51d71dd0-a5ff-4891-8801-03d66bb6994c/volumes" Jan 27 09:57:18 crc kubenswrapper[4869]: I0127 09:57:18.040242 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d235ac0-6891-411b-8d02-2333775dcb9a" path="/var/lib/kubelet/pods/7d235ac0-6891-411b-8d02-2333775dcb9a/volumes" Jan 27 09:57:18 crc kubenswrapper[4869]: I0127 09:57:18.220538 4869 generic.go:334] "Generic (PLEG): container finished" podID="3e2c5b6e-1f12-4906-b2f8-303354595a04" containerID="90bb645ec3bc5542d292219ec70a6a4bf9cc843b988aefb3048c63ab2033da69" exitCode=0 Jan 27 09:57:18 crc kubenswrapper[4869]: I0127 09:57:18.220772 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chwp4" event={"ID":"3e2c5b6e-1f12-4906-b2f8-303354595a04","Type":"ContainerDied","Data":"90bb645ec3bc5542d292219ec70a6a4bf9cc843b988aefb3048c63ab2033da69"} Jan 27 09:57:18 crc kubenswrapper[4869]: I0127 09:57:18.358079 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-chwp4" Jan 27 09:57:18 crc kubenswrapper[4869]: I0127 09:57:18.429851 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e2c5b6e-1f12-4906-b2f8-303354595a04-catalog-content\") pod \"3e2c5b6e-1f12-4906-b2f8-303354595a04\" (UID: \"3e2c5b6e-1f12-4906-b2f8-303354595a04\") " Jan 27 09:57:18 crc kubenswrapper[4869]: I0127 09:57:18.429998 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e2c5b6e-1f12-4906-b2f8-303354595a04-utilities\") pod \"3e2c5b6e-1f12-4906-b2f8-303354595a04\" (UID: \"3e2c5b6e-1f12-4906-b2f8-303354595a04\") " Jan 27 09:57:18 crc kubenswrapper[4869]: I0127 09:57:18.430020 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbvh7\" (UniqueName: \"kubernetes.io/projected/3e2c5b6e-1f12-4906-b2f8-303354595a04-kube-api-access-dbvh7\") pod \"3e2c5b6e-1f12-4906-b2f8-303354595a04\" (UID: \"3e2c5b6e-1f12-4906-b2f8-303354595a04\") " Jan 27 09:57:18 crc kubenswrapper[4869]: I0127 09:57:18.430806 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e2c5b6e-1f12-4906-b2f8-303354595a04-utilities" (OuterVolumeSpecName: "utilities") pod "3e2c5b6e-1f12-4906-b2f8-303354595a04" (UID: "3e2c5b6e-1f12-4906-b2f8-303354595a04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:57:18 crc kubenswrapper[4869]: I0127 09:57:18.435199 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e2c5b6e-1f12-4906-b2f8-303354595a04-kube-api-access-dbvh7" (OuterVolumeSpecName: "kube-api-access-dbvh7") pod "3e2c5b6e-1f12-4906-b2f8-303354595a04" (UID: "3e2c5b6e-1f12-4906-b2f8-303354595a04"). InnerVolumeSpecName "kube-api-access-dbvh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:57:18 crc kubenswrapper[4869]: I0127 09:57:18.532707 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e2c5b6e-1f12-4906-b2f8-303354595a04-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:18 crc kubenswrapper[4869]: I0127 09:57:18.532753 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbvh7\" (UniqueName: \"kubernetes.io/projected/3e2c5b6e-1f12-4906-b2f8-303354595a04-kube-api-access-dbvh7\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:18 crc kubenswrapper[4869]: I0127 09:57:18.547591 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e2c5b6e-1f12-4906-b2f8-303354595a04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e2c5b6e-1f12-4906-b2f8-303354595a04" (UID: "3e2c5b6e-1f12-4906-b2f8-303354595a04"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:57:18 crc kubenswrapper[4869]: I0127 09:57:18.634577 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e2c5b6e-1f12-4906-b2f8-303354595a04-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:19 crc kubenswrapper[4869]: I0127 09:57:19.234367 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-chwp4" event={"ID":"3e2c5b6e-1f12-4906-b2f8-303354595a04","Type":"ContainerDied","Data":"e1591e613e1693fd032304c9de1dbee952c4ef2304bc0186b4571580e0dcf0c7"} Jan 27 09:57:19 crc kubenswrapper[4869]: I0127 09:57:19.234712 4869 scope.go:117] "RemoveContainer" containerID="90bb645ec3bc5542d292219ec70a6a4bf9cc843b988aefb3048c63ab2033da69" Jan 27 09:57:19 crc kubenswrapper[4869]: I0127 09:57:19.234417 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-chwp4" Jan 27 09:57:19 crc kubenswrapper[4869]: I0127 09:57:19.252327 4869 scope.go:117] "RemoveContainer" containerID="fdc3a9de0d776ab617bc8ed90e25adbde77fd54d84a488473ecfc8930e4d558e" Jan 27 09:57:19 crc kubenswrapper[4869]: I0127 09:57:19.263612 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-chwp4"] Jan 27 09:57:19 crc kubenswrapper[4869]: I0127 09:57:19.266023 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-chwp4"] Jan 27 09:57:19 crc kubenswrapper[4869]: I0127 09:57:19.290414 4869 scope.go:117] "RemoveContainer" containerID="bbf3dd7a6acabd0165f0cbad7ef22c80079d3c37281b92908fa2ea2f2f7a8e71" Jan 27 09:57:20 crc kubenswrapper[4869]: I0127 09:57:20.039032 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e2c5b6e-1f12-4906-b2f8-303354595a04" path="/var/lib/kubelet/pods/3e2c5b6e-1f12-4906-b2f8-303354595a04/volumes" Jan 27 09:57:21 crc kubenswrapper[4869]: I0127 09:57:21.769959 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c6zf4" Jan 27 09:57:22 crc kubenswrapper[4869]: I0127 09:57:22.995934 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c6zf4"] Jan 27 09:57:22 crc kubenswrapper[4869]: I0127 09:57:22.996273 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c6zf4" podUID="fe066c30-021e-4a80-8541-148eec52dde8" containerName="registry-server" containerID="cri-o://c88f19cfcf2d5a68ed193a176968ebec7a04455eec5d30890ec334ec49c785d9" gracePeriod=2 Jan 27 09:57:23 crc kubenswrapper[4869]: I0127 09:57:23.140047 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qz25t" Jan 27 09:57:23 crc kubenswrapper[4869]: I0127 09:57:23.189135 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qz25t" Jan 27 09:57:23 crc kubenswrapper[4869]: I0127 09:57:23.260062 4869 generic.go:334] "Generic (PLEG): container finished" podID="fe066c30-021e-4a80-8541-148eec52dde8" containerID="c88f19cfcf2d5a68ed193a176968ebec7a04455eec5d30890ec334ec49c785d9" exitCode=0 Jan 27 09:57:23 crc kubenswrapper[4869]: I0127 09:57:23.260134 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6zf4" 
event={"ID":"fe066c30-021e-4a80-8541-148eec52dde8","Type":"ContainerDied","Data":"c88f19cfcf2d5a68ed193a176968ebec7a04455eec5d30890ec334ec49c785d9"} Jan 27 09:57:23 crc kubenswrapper[4869]: I0127 09:57:23.358259 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c6zf4" Jan 27 09:57:23 crc kubenswrapper[4869]: I0127 09:57:23.387732 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe066c30-021e-4a80-8541-148eec52dde8-catalog-content\") pod \"fe066c30-021e-4a80-8541-148eec52dde8\" (UID: \"fe066c30-021e-4a80-8541-148eec52dde8\") " Jan 27 09:57:23 crc kubenswrapper[4869]: I0127 09:57:23.387892 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe066c30-021e-4a80-8541-148eec52dde8-utilities\") pod \"fe066c30-021e-4a80-8541-148eec52dde8\" (UID: \"fe066c30-021e-4a80-8541-148eec52dde8\") " Jan 27 09:57:23 crc kubenswrapper[4869]: I0127 09:57:23.388000 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv9p2\" (UniqueName: \"kubernetes.io/projected/fe066c30-021e-4a80-8541-148eec52dde8-kube-api-access-xv9p2\") pod \"fe066c30-021e-4a80-8541-148eec52dde8\" (UID: \"fe066c30-021e-4a80-8541-148eec52dde8\") " Jan 27 09:57:23 crc kubenswrapper[4869]: I0127 09:57:23.388717 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe066c30-021e-4a80-8541-148eec52dde8-utilities" (OuterVolumeSpecName: "utilities") pod "fe066c30-021e-4a80-8541-148eec52dde8" (UID: "fe066c30-021e-4a80-8541-148eec52dde8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:57:23 crc kubenswrapper[4869]: I0127 09:57:23.394002 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe066c30-021e-4a80-8541-148eec52dde8-kube-api-access-xv9p2" (OuterVolumeSpecName: "kube-api-access-xv9p2") pod "fe066c30-021e-4a80-8541-148eec52dde8" (UID: "fe066c30-021e-4a80-8541-148eec52dde8"). InnerVolumeSpecName "kube-api-access-xv9p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:57:23 crc kubenswrapper[4869]: I0127 09:57:23.441408 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe066c30-021e-4a80-8541-148eec52dde8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe066c30-021e-4a80-8541-148eec52dde8" (UID: "fe066c30-021e-4a80-8541-148eec52dde8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:57:23 crc kubenswrapper[4869]: I0127 09:57:23.489919 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe066c30-021e-4a80-8541-148eec52dde8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:23 crc kubenswrapper[4869]: I0127 09:57:23.489958 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe066c30-021e-4a80-8541-148eec52dde8-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:23 crc kubenswrapper[4869]: I0127 09:57:23.489975 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv9p2\" (UniqueName: \"kubernetes.io/projected/fe066c30-021e-4a80-8541-148eec52dde8-kube-api-access-xv9p2\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:24 crc kubenswrapper[4869]: I0127 09:57:24.268999 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6zf4" event={"ID":"fe066c30-021e-4a80-8541-148eec52dde8","Type":"ContainerDied","Data":"7db8e691e3fa02ea7a70ac3299d546afbf411dc0e8b8b956bc70b7585238fbfd"} Jan 27 09:57:24 crc kubenswrapper[4869]: I0127 09:57:24.269399 4869 scope.go:117] "RemoveContainer" containerID="c88f19cfcf2d5a68ed193a176968ebec7a04455eec5d30890ec334ec49c785d9" Jan 27 09:57:24 crc kubenswrapper[4869]: I0127 09:57:24.269192 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c6zf4" Jan 27 09:57:24 crc kubenswrapper[4869]: I0127 09:57:24.290034 4869 scope.go:117] "RemoveContainer" containerID="067d46ca1e673881bb32ca4592264987e3197898dc4f6e217c5dd1b6094a9192" Jan 27 09:57:24 crc kubenswrapper[4869]: I0127 09:57:24.294276 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c6zf4"] Jan 27 09:57:24 crc kubenswrapper[4869]: I0127 09:57:24.299376 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c6zf4"] Jan 27 09:57:24 crc kubenswrapper[4869]: I0127 09:57:24.313804 4869 scope.go:117] "RemoveContainer" containerID="4ee52253452b8908ce112d64ec0d2a22bc9d11b088d23eef3a62e1528945c906" Jan 27 09:57:26 crc kubenswrapper[4869]: I0127 09:57:26.043881 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe066c30-021e-4a80-8541-148eec52dde8" path="/var/lib/kubelet/pods/fe066c30-021e-4a80-8541-148eec52dde8/volumes" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.376727 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" podUID="a1cbbd0a-4425-4c44-a867-daaa6e90a6d3" containerName="oauth-openshift" containerID="cri-o://66f15107b2c646be50a91c143afe7c055cf67ae86a7fa002c5a7380957c4e54e" gracePeriod=15 Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.760802 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.880221 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-template-login\") pod \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.880264 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-ocp-branding-template\") pod \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.880292 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-idp-0-file-data\") pod \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.880314 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-audit-dir\") pod \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.880371 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-template-provider-selection\") pod \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.880393 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-trusted-ca-bundle\") pod \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.880417 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-cliconfig\") pod \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.880441 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-template-error\") pod \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.880482 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-service-ca\") pod \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\" (UID: 
\"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.880496 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-audit-policies\") pod \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.880515 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-router-certs\") pod \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.880541 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-serving-cert\") pod \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.880576 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-session\") pod \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.880596 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4gwz\" (UniqueName: \"kubernetes.io/projected/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-kube-api-access-b4gwz\") pod \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\" (UID: \"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3\") " Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.882025 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3" (UID: "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.882082 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3" (UID: "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.882085 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3" (UID: "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.882469 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3" (UID: "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.883343 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3" (UID: "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.886144 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3" (UID: "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.886618 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3" (UID: "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.887751 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-kube-api-access-b4gwz" (OuterVolumeSpecName: "kube-api-access-b4gwz") pod "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3" (UID: "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3"). InnerVolumeSpecName "kube-api-access-b4gwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.889301 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3" (UID: "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.890147 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3" (UID: "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.890470 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3" (UID: "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.890811 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3" (UID: "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.892047 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3" (UID: "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.892448 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3" (UID: "a1cbbd0a-4425-4c44-a867-daaa6e90a6d3"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.982388 4869 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.982424 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.982437 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.982447 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.982461 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.982470 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4gwz\" (UniqueName: \"kubernetes.io/projected/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-kube-api-access-b4gwz\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.982478 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.982488 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.982496 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.982505 4869 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.982513 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.982523 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.982533 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:37 crc kubenswrapper[4869]: I0127 09:57:37.982541 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:38 crc kubenswrapper[4869]: I0127 09:57:38.337343 4869 generic.go:334] "Generic (PLEG): container finished" podID="a1cbbd0a-4425-4c44-a867-daaa6e90a6d3" containerID="66f15107b2c646be50a91c143afe7c055cf67ae86a7fa002c5a7380957c4e54e" exitCode=0 Jan 27 09:57:38 crc kubenswrapper[4869]: I0127 09:57:38.337388 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" event={"ID":"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3","Type":"ContainerDied","Data":"66f15107b2c646be50a91c143afe7c055cf67ae86a7fa002c5a7380957c4e54e"} Jan 27 09:57:38 crc kubenswrapper[4869]: I0127 09:57:38.337418 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" event={"ID":"a1cbbd0a-4425-4c44-a867-daaa6e90a6d3","Type":"ContainerDied","Data":"f8a9906b6c89f27dde8b80c01f245ce7d6c3e476b0871aae0f66773cdc3a7c66"} Jan 27 09:57:38 crc kubenswrapper[4869]: I0127 09:57:38.337412 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rnv4g" Jan 27 09:57:38 crc kubenswrapper[4869]: I0127 09:57:38.337436 4869 scope.go:117] "RemoveContainer" containerID="66f15107b2c646be50a91c143afe7c055cf67ae86a7fa002c5a7380957c4e54e" Jan 27 09:57:38 crc kubenswrapper[4869]: I0127 09:57:38.358646 4869 scope.go:117] "RemoveContainer" containerID="66f15107b2c646be50a91c143afe7c055cf67ae86a7fa002c5a7380957c4e54e" Jan 27 09:57:38 crc kubenswrapper[4869]: I0127 09:57:38.359321 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rnv4g"] Jan 27 09:57:38 crc kubenswrapper[4869]: E0127 09:57:38.359369 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66f15107b2c646be50a91c143afe7c055cf67ae86a7fa002c5a7380957c4e54e\": container with ID starting with 66f15107b2c646be50a91c143afe7c055cf67ae86a7fa002c5a7380957c4e54e not found: ID does not exist" containerID="66f15107b2c646be50a91c143afe7c055cf67ae86a7fa002c5a7380957c4e54e" Jan 27 09:57:38 crc kubenswrapper[4869]: I0127 09:57:38.359403 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66f15107b2c646be50a91c143afe7c055cf67ae86a7fa002c5a7380957c4e54e"} err="failed to get container status \"66f15107b2c646be50a91c143afe7c055cf67ae86a7fa002c5a7380957c4e54e\": rpc error: code = NotFound desc = could not find container \"66f15107b2c646be50a91c143afe7c055cf67ae86a7fa002c5a7380957c4e54e\": container with ID starting with 66f15107b2c646be50a91c143afe7c055cf67ae86a7fa002c5a7380957c4e54e not found: ID does not exist" Jan 27 09:57:38 crc kubenswrapper[4869]: I0127 09:57:38.365694 4869 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rnv4g"] Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.842621 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-78558fc4d-bbbk8"] Jan 27 09:57:39 crc kubenswrapper[4869]: E0127 09:57:39.843034 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d235ac0-6891-411b-8d02-2333775dcb9a" containerName="extract-utilities" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.843074 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d235ac0-6891-411b-8d02-2333775dcb9a" containerName="extract-utilities" Jan 27 09:57:39 crc kubenswrapper[4869]: E0127 09:57:39.843089 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d235ac0-6891-411b-8d02-2333775dcb9a" containerName="extract-content" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.843098 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d235ac0-6891-411b-8d02-2333775dcb9a" containerName="extract-content" Jan 27 09:57:39 crc kubenswrapper[4869]: E0127 09:57:39.843111 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e2c5b6e-1f12-4906-b2f8-303354595a04" containerName="extract-content" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.843120 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e2c5b6e-1f12-4906-b2f8-303354595a04" containerName="extract-content" Jan 27 09:57:39 crc kubenswrapper[4869]: E0127 09:57:39.843157 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1cbbd0a-4425-4c44-a867-daaa6e90a6d3" containerName="oauth-openshift" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.843166 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1cbbd0a-4425-4c44-a867-daaa6e90a6d3" containerName="oauth-openshift" Jan 27 09:57:39 crc kubenswrapper[4869]: E0127 09:57:39.843177 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d235ac0-6891-411b-8d02-2333775dcb9a" containerName="registry-server" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.843185 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d235ac0-6891-411b-8d02-2333775dcb9a" containerName="registry-server" Jan 27 09:57:39 crc kubenswrapper[4869]: E0127 09:57:39.843196 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e2c5b6e-1f12-4906-b2f8-303354595a04" containerName="registry-server" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.843203 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e2c5b6e-1f12-4906-b2f8-303354595a04" containerName="registry-server" Jan 27 09:57:39 crc kubenswrapper[4869]: E0127 09:57:39.843236 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d71dd0-a5ff-4891-8801-03d66bb6994c" containerName="extract-utilities" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.843245 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d71dd0-a5ff-4891-8801-03d66bb6994c" containerName="extract-utilities" Jan 27 09:57:39 crc kubenswrapper[4869]: E0127 09:57:39.843259 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d71dd0-a5ff-4891-8801-03d66bb6994c" containerName="registry-server" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.843267 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d71dd0-a5ff-4891-8801-03d66bb6994c" containerName="registry-server" Jan 27 09:57:39 crc kubenswrapper[4869]: E0127 09:57:39.843279 4869 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fe066c30-021e-4a80-8541-148eec52dde8" containerName="extract-content" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.843287 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe066c30-021e-4a80-8541-148eec52dde8" containerName="extract-content" Jan 27 09:57:39 crc kubenswrapper[4869]: E0127 09:57:39.843295 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e2c5b6e-1f12-4906-b2f8-303354595a04" containerName="extract-utilities" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.843303 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e2c5b6e-1f12-4906-b2f8-303354595a04" containerName="extract-utilities" Jan 27 09:57:39 crc kubenswrapper[4869]: E0127 09:57:39.843315 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d71dd0-a5ff-4891-8801-03d66bb6994c" containerName="extract-content" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.843322 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d71dd0-a5ff-4891-8801-03d66bb6994c" containerName="extract-content" Jan 27 09:57:39 crc kubenswrapper[4869]: E0127 09:57:39.843336 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe066c30-021e-4a80-8541-148eec52dde8" containerName="extract-utilities" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.843345 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe066c30-021e-4a80-8541-148eec52dde8" containerName="extract-utilities" Jan 27 09:57:39 crc kubenswrapper[4869]: E0127 09:57:39.843355 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe066c30-021e-4a80-8541-148eec52dde8" containerName="registry-server" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.843362 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe066c30-021e-4a80-8541-148eec52dde8" containerName="registry-server" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.843494 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1cbbd0a-4425-4c44-a867-daaa6e90a6d3" containerName="oauth-openshift" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.843506 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="51d71dd0-a5ff-4891-8801-03d66bb6994c" containerName="registry-server" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.843520 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe066c30-021e-4a80-8541-148eec52dde8" containerName="registry-server" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.843532 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e2c5b6e-1f12-4906-b2f8-303354595a04" containerName="registry-server" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.843542 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d235ac0-6891-411b-8d02-2333775dcb9a" containerName="registry-server" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.844018 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.849403 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.849719 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.849826 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.850056 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.852628 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.852648 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.852703 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.852776 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.852862 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.852952 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.853669 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.853718 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.853589 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-78558fc4d-bbbk8"] Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.867939 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.870546 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.876447 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.908447 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " 
pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.908494 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-system-service-ca\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.908524 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-user-template-login\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.908552 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.908676 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.908738 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.908910 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-system-router-certs\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.908992 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blv7d\" (UniqueName: \"kubernetes.io/projected/4bd4de42-839d-480c-9a9d-6726e83be5d9-kube-api-access-blv7d\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.909103 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/4bd4de42-839d-480c-9a9d-6726e83be5d9-audit-policies\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.909234 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-user-template-error\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.909302 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.909331 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-system-session\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.909355 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:39 crc kubenswrapper[4869]: I0127 09:57:39.909379 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4bd4de42-839d-480c-9a9d-6726e83be5d9-audit-dir\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.010470 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blv7d\" (UniqueName: \"kubernetes.io/projected/4bd4de42-839d-480c-9a9d-6726e83be5d9-kube-api-access-blv7d\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.010569 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4bd4de42-839d-480c-9a9d-6726e83be5d9-audit-policies\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.010616 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-user-template-error\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.010681 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.010719 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-system-session\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.011620 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4bd4de42-839d-480c-9a9d-6726e83be5d9-audit-dir\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.011669 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4bd4de42-839d-480c-9a9d-6726e83be5d9-audit-policies\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.011925 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4bd4de42-839d-480c-9a9d-6726e83be5d9-audit-dir\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.011971 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.011998 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.012037 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-system-service-ca\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: 
\"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.012079 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-user-template-login\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.012134 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.012174 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.012206 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.012243 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-system-router-certs\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.012777 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-system-service-ca\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.013328 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.013368 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-78558fc4d-bbbk8\" 
(UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.016449 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-system-session\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.016510 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-user-template-error\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.017275 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-user-template-login\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.018221 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.018357 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.019993 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.020058 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-system-router-certs\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.021946 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4bd4de42-839d-480c-9a9d-6726e83be5d9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " 
pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.031418 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blv7d\" (UniqueName: \"kubernetes.io/projected/4bd4de42-839d-480c-9a9d-6726e83be5d9-kube-api-access-blv7d\") pod \"oauth-openshift-78558fc4d-bbbk8\" (UID: \"4bd4de42-839d-480c-9a9d-6726e83be5d9\") " pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.045928 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1cbbd0a-4425-4c44-a867-daaa6e90a6d3" path="/var/lib/kubelet/pods/a1cbbd0a-4425-4c44-a867-daaa6e90a6d3/volumes" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.181059 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:40 crc kubenswrapper[4869]: I0127 09:57:40.574727 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-78558fc4d-bbbk8"] Jan 27 09:57:41 crc kubenswrapper[4869]: I0127 09:57:41.353751 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" event={"ID":"4bd4de42-839d-480c-9a9d-6726e83be5d9","Type":"ContainerStarted","Data":"5fad47e826ed90f7382890c71487648603894fa990570cafdf35730345faa7c6"} Jan 27 09:57:41 crc kubenswrapper[4869]: I0127 09:57:41.353787 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" event={"ID":"4bd4de42-839d-480c-9a9d-6726e83be5d9","Type":"ContainerStarted","Data":"2dc77cf23edd7578f499ca9349ca0a5c75a48e8d0edf035059096e0cfdd136ee"} Jan 27 09:57:41 crc kubenswrapper[4869]: I0127 09:57:41.354221 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:41 crc kubenswrapper[4869]: I0127 09:57:41.360505 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" Jan 27 09:57:41 crc kubenswrapper[4869]: I0127 09:57:41.375104 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-78558fc4d-bbbk8" podStartSLOduration=29.375090995 podStartE2EDuration="29.375090995s" podCreationTimestamp="2026-01-27 09:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:57:41.373965227 +0000 UTC m=+229.994389320" watchObservedRunningTime="2026-01-27 09:57:41.375090995 +0000 UTC m=+229.995515078" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.540157 4869 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.540861 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8" gracePeriod=15 Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.540972 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058" gracePeriod=15 Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.540925 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4" gracePeriod=15 Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.541012 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3" gracePeriod=15 Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.541003 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458" gracePeriod=15 Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.541406 4869 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 09:57:49 crc kubenswrapper[4869]: E0127 09:57:49.541589 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.541601 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 09:57:49 crc kubenswrapper[4869]: E0127 09:57:49.541613 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.541618 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 09:57:49 crc kubenswrapper[4869]: E0127 09:57:49.541629 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.541635 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 09:57:49 crc kubenswrapper[4869]: E0127 09:57:49.541643 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.541650 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 09:57:49 crc kubenswrapper[4869]: E0127 09:57:49.541662 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.541668 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 09:57:49 crc 
kubenswrapper[4869]: E0127 09:57:49.541677 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.541682 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 09:57:49 crc kubenswrapper[4869]: E0127 09:57:49.541690 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.541698 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 09:57:49 crc kubenswrapper[4869]: E0127 09:57:49.541712 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.541723 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.541824 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.541853 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.541859 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.541867 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.541877 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.541884 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.541894 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.543169 4869 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.543972 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.551244 4869 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.629683 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.629938 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.630086 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.630171 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.630281 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.630392 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.630474 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.630558 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.731960 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.732414 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.732493 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.732268 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.732669 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.732503 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.732779 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.732821 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.732894 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.732927 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.732958 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.733094 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.733115 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.733127 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.733153 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:57:49 crc kubenswrapper[4869]: I0127 09:57:49.733102 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 09:57:50 crc kubenswrapper[4869]: I0127 09:57:50.397443 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 09:57:50 crc kubenswrapper[4869]: I0127 09:57:50.398971 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 09:57:50 crc kubenswrapper[4869]: I0127 09:57:50.399815 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4" exitCode=0 Jan 27 09:57:50 crc kubenswrapper[4869]: I0127 09:57:50.399938 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058" exitCode=0 Jan 27 09:57:50 crc kubenswrapper[4869]: I0127 09:57:50.399889 4869 scope.go:117] "RemoveContainer" containerID="3a9bb62b291c1ed04ea7cfbf297e37b5dee9e9b06c0028d0367ff3aba814dfef" Jan 27 09:57:50 crc kubenswrapper[4869]: I0127 09:57:50.400165 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458" exitCode=0 Jan 27 09:57:50 crc kubenswrapper[4869]: I0127 09:57:50.400569 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3" exitCode=2 Jan 27 09:57:50 crc kubenswrapper[4869]: I0127 09:57:50.402162 4869 generic.go:334] "Generic (PLEG): container finished" podID="8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a" containerID="0078a13767122b107cb3846ff83d3bcd52adee8aba2f0c31ac69b80976993ec4" exitCode=0 Jan 27 09:57:50 crc kubenswrapper[4869]: I0127 09:57:50.402249 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a","Type":"ContainerDied","Data":"0078a13767122b107cb3846ff83d3bcd52adee8aba2f0c31ac69b80976993ec4"} Jan 27 09:57:50 crc kubenswrapper[4869]: I0127 09:57:50.403019 4869 status_manager.go:851] "Failed to get status for pod" podUID="8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.410497 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.713269 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.714229 4869 status_manager.go:851] "Failed to get status for pod" podUID="8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 27 09:57:51 crc kubenswrapper[4869]: E0127 09:57:51.747116 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 27 09:57:51 crc kubenswrapper[4869]: E0127 09:57:51.747461 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 27 09:57:51 crc kubenswrapper[4869]: E0127 09:57:51.747665 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 27 09:57:51 crc kubenswrapper[4869]: E0127 09:57:51.748016 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 27 09:57:51 crc kubenswrapper[4869]: E0127 09:57:51.748459 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.748489 4869 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 09:57:51 crc kubenswrapper[4869]: E0127 09:57:51.748764 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="200ms" Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.760177 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a-kubelet-dir\") pod \"8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a\" (UID: \"8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a\") " Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.760249 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a-kube-api-access\") pod \"8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a\" (UID: \"8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a\") " Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.760286 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a-var-lock\") pod \"8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a\" (UID: \"8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a\") " Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 
09:57:51.760332 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a" (UID: "8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.760455 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a-var-lock" (OuterVolumeSpecName: "var-lock") pod "8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a" (UID: "8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.760595 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.760608 4869 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.766621 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a" (UID: "8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.861421 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.923281 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.923941 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.924514 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.924937 4869 status_manager.go:851] "Failed to get status for pod" podUID="8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 27 09:57:51 crc kubenswrapper[4869]: E0127 09:57:51.949644 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="400ms" Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.962621 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.962660 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.962727 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.962820 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.962823 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.962870 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.963260 4869 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.963275 4869 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:51 crc kubenswrapper[4869]: I0127 09:57:51.963284 4869 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.037291 4869 status_manager.go:851] "Failed to get status for pod" podUID="8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.037770 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.039106 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 09:57:52 crc kubenswrapper[4869]: E0127 09:57:52.350494 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="800ms" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.419425 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8" exitCode=0 Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.419533 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.419529 4869 scope.go:117] "RemoveContainer" containerID="093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.420380 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.420942 4869 status_manager.go:851] "Failed to get status for pod" podUID="8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.421082 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.421080 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a","Type":"ContainerDied","Data":"5ec0f91d7b62d253969087b0b683068e905035711cde68cbf282ca4ec6e077ea"} Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.421110 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ec0f91d7b62d253969087b0b683068e905035711cde68cbf282ca4ec6e077ea" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.422503 4869 status_manager.go:851] "Failed to get status for pod" podUID="8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.422846 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.423613 4869 status_manager.go:851] "Failed to get status for pod" podUID="8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.423872 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.438587 4869 scope.go:117] "RemoveContainer" containerID="5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.449081 4869 scope.go:117] 
"RemoveContainer" containerID="573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.459931 4869 scope.go:117] "RemoveContainer" containerID="f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.478115 4869 scope.go:117] "RemoveContainer" containerID="50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.492992 4869 scope.go:117] "RemoveContainer" containerID="542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.513090 4869 scope.go:117] "RemoveContainer" containerID="093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4" Jan 27 09:57:52 crc kubenswrapper[4869]: E0127 09:57:52.513620 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4\": container with ID starting with 093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4 not found: ID does not exist" containerID="093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.513651 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4"} err="failed to get container status \"093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4\": rpc error: code = NotFound desc = could not find container \"093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4\": container with ID starting with 093c66f4610eb764539782505c97c8ec96981b9ebc52652ff9f572b4404b0aa4 not found: ID does not exist" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.513704 4869 scope.go:117] "RemoveContainer" containerID="5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058" Jan 27 09:57:52 crc kubenswrapper[4869]: E0127 09:57:52.514252 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\": container with ID starting with 5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058 not found: ID does not exist" containerID="5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.514296 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058"} err="failed to get container status \"5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\": rpc error: code = NotFound desc = could not find container \"5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058\": container with ID starting with 5598462d798b21596ec7771a965a901837939ae5044af6db736d825236fdb058 not found: ID does not exist" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.514329 4869 scope.go:117] "RemoveContainer" containerID="573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458" Jan 27 09:57:52 crc kubenswrapper[4869]: E0127 09:57:52.514564 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\": 
container with ID starting with 573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458 not found: ID does not exist" containerID="573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.514592 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458"} err="failed to get container status \"573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\": rpc error: code = NotFound desc = could not find container \"573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458\": container with ID starting with 573f8205e9d0368c3a4d3cf5f1dd5be27632fb9ce19b5d48eaeb2203a975b458 not found: ID does not exist" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.514607 4869 scope.go:117] "RemoveContainer" containerID="f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3" Jan 27 09:57:52 crc kubenswrapper[4869]: E0127 09:57:52.514995 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\": container with ID starting with f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3 not found: ID does not exist" containerID="f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.515017 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3"} err="failed to get container status \"f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\": rpc error: code = NotFound desc = could not find container \"f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3\": container with ID starting with f812984c6a64b29a1599241c9778ac714d8ec6d8349420e530ccd4dbe9c214b3 not found: ID does not exist" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.515031 4869 scope.go:117] "RemoveContainer" containerID="50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8" Jan 27 09:57:52 crc kubenswrapper[4869]: E0127 09:57:52.515369 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\": container with ID starting with 50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8 not found: ID does not exist" containerID="50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.515388 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8"} err="failed to get container status \"50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\": rpc error: code = NotFound desc = could not find container \"50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8\": container with ID starting with 50eee3182e5a50d5229b60dc215112fd1d604685935cb15521b33cf62035dbe8 not found: ID does not exist" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.515403 4869 scope.go:117] "RemoveContainer" containerID="542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4" Jan 27 09:57:52 crc kubenswrapper[4869]: E0127 09:57:52.515630 4869 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\": container with ID starting with 542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4 not found: ID does not exist" containerID="542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4" Jan 27 09:57:52 crc kubenswrapper[4869]: I0127 09:57:52.515648 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4"} err="failed to get container status \"542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\": rpc error: code = NotFound desc = could not find container \"542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4\": container with ID starting with 542f46ffb53a9d75e603cca6784050810b63f68d4ade1d5938b54c4b221a81d4 not found: ID does not exist" Jan 27 09:57:53 crc kubenswrapper[4869]: E0127 09:57:53.151635 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="1.6s" Jan 27 09:57:54 crc kubenswrapper[4869]: E0127 09:57:54.582247 4869 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.50:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 09:57:54 crc kubenswrapper[4869]: I0127 09:57:54.582669 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 09:57:54 crc kubenswrapper[4869]: W0127 09:57:54.599404 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-c27d75810ce5062f7b30d959e918d316dd70a01dfd1fb68a09313b3d9f3fb154 WatchSource:0}: Error finding container c27d75810ce5062f7b30d959e918d316dd70a01dfd1fb68a09313b3d9f3fb154: Status 404 returned error can't find the container with id c27d75810ce5062f7b30d959e918d316dd70a01dfd1fb68a09313b3d9f3fb154 Jan 27 09:57:54 crc kubenswrapper[4869]: E0127 09:57:54.603004 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.50:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e8e0964e04e5c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 09:57:54.601455196 +0000 UTC m=+243.221879289,LastTimestamp:2026-01-27 09:57:54.601455196 +0000 UTC m=+243.221879289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 09:57:54 crc kubenswrapper[4869]: E0127 09:57:54.752329 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="3.2s" Jan 27 09:57:55 crc kubenswrapper[4869]: I0127 09:57:55.438239 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9c884022d94ddd30ea6a4c0fbcba8b98c759dd6d610b434cca5e29bcc4a07d49"} Jan 27 09:57:55 crc kubenswrapper[4869]: I0127 09:57:55.438567 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c27d75810ce5062f7b30d959e918d316dd70a01dfd1fb68a09313b3d9f3fb154"} Jan 27 09:57:55 crc kubenswrapper[4869]: E0127 09:57:55.439181 4869 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.50:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 09:57:55 crc kubenswrapper[4869]: I0127 09:57:55.439318 4869 status_manager.go:851] "Failed to get status for pod" podUID="8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 27 09:57:57 crc kubenswrapper[4869]: E0127 09:57:57.953930 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="6.4s" Jan 27 09:58:01 crc kubenswrapper[4869]: E0127 09:58:01.435622 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.50:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e8e0964e04e5c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 09:57:54.601455196 +0000 UTC m=+243.221879289,LastTimestamp:2026-01-27 09:57:54.601455196 +0000 UTC m=+243.221879289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 09:58:02 crc kubenswrapper[4869]: I0127 09:58:02.032224 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:58:02 crc kubenswrapper[4869]: I0127 09:58:02.036159 4869 status_manager.go:851] "Failed to get status for pod" podUID="8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 27 09:58:02 crc kubenswrapper[4869]: I0127 09:58:02.036511 4869 status_manager.go:851] "Failed to get status for pod" podUID="8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 27 09:58:02 crc kubenswrapper[4869]: I0127 09:58:02.051216 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55c32b0f-8923-45c7-8035-26900ba6048b" Jan 27 09:58:02 crc kubenswrapper[4869]: I0127 09:58:02.051253 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55c32b0f-8923-45c7-8035-26900ba6048b" Jan 27 09:58:02 crc kubenswrapper[4869]: E0127 09:58:02.051667 4869 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:58:02 crc kubenswrapper[4869]: I0127 09:58:02.052184 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:58:02 crc kubenswrapper[4869]: I0127 09:58:02.488882 4869 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="e64c0558d8c7f85c90df693fd82a60c0889d635d8861ee1ed6be2116997418bb" exitCode=0 Jan 27 09:58:02 crc kubenswrapper[4869]: I0127 09:58:02.489001 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"e64c0558d8c7f85c90df693fd82a60c0889d635d8861ee1ed6be2116997418bb"} Jan 27 09:58:02 crc kubenswrapper[4869]: I0127 09:58:02.489182 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2edbebc074cb2c78ee4b0eebc431672134db336b0809269a491651218fa3361d"} Jan 27 09:58:02 crc kubenswrapper[4869]: I0127 09:58:02.489463 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55c32b0f-8923-45c7-8035-26900ba6048b" Jan 27 09:58:02 crc kubenswrapper[4869]: I0127 09:58:02.489477 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55c32b0f-8923-45c7-8035-26900ba6048b" Jan 27 09:58:02 crc kubenswrapper[4869]: E0127 09:58:02.489875 4869 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:58:02 crc kubenswrapper[4869]: I0127 09:58:02.490615 4869 status_manager.go:851] "Failed to get status for pod" 
podUID="8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 27 09:58:03 crc kubenswrapper[4869]: I0127 09:58:03.505566 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d07d0d0822642039a36da603bba96edb64a6fac14e20cbcc78e0312acb17671a"} Jan 27 09:58:03 crc kubenswrapper[4869]: I0127 09:58:03.506016 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6d1e488bff4746851eb9196849af39e786e8adef5300b2b27a894497323bcf14"} Jan 27 09:58:03 crc kubenswrapper[4869]: I0127 09:58:03.506027 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cb3b4f77daa6b5af3ca8cf053059155c6d5f6aa9a391a4116d1eaf9f44805883"} Jan 27 09:58:03 crc kubenswrapper[4869]: I0127 09:58:03.506036 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cd3f6eb8ae65a43fd0f5061c27f797b2300be7e42fd1a3041f2949433bc418b2"} Jan 27 09:58:04 crc kubenswrapper[4869]: I0127 09:58:04.445249 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 27 09:58:04 crc kubenswrapper[4869]: I0127 09:58:04.445527 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 27 09:58:04 crc kubenswrapper[4869]: I0127 09:58:04.512317 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 09:58:04 crc kubenswrapper[4869]: I0127 09:58:04.512363 4869 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05" exitCode=1 Jan 27 09:58:04 crc kubenswrapper[4869]: I0127 09:58:04.512416 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05"} Jan 27 09:58:04 crc kubenswrapper[4869]: I0127 09:58:04.512875 4869 scope.go:117] "RemoveContainer" containerID="8b56765ec3f9de8093615d2c4b8bce20310339aa60ed9e67549a6ee59d963f05" Jan 27 09:58:04 crc kubenswrapper[4869]: I0127 09:58:04.518151 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f7877dcc6538e75f67424fd28d87d8a665fb8d686292df83a5294a5ad3fbb98c"} Jan 27 09:58:04 crc kubenswrapper[4869]: I0127 09:58:04.518379 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:58:04 crc kubenswrapper[4869]: I0127 09:58:04.518445 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55c32b0f-8923-45c7-8035-26900ba6048b" Jan 27 09:58:04 crc kubenswrapper[4869]: I0127 09:58:04.518471 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55c32b0f-8923-45c7-8035-26900ba6048b" Jan 27 09:58:05 crc kubenswrapper[4869]: I0127 09:58:05.527022 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 09:58:05 crc kubenswrapper[4869]: I0127 09:58:05.527313 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bbc9313d116130dc35cce003a79dc4967b11012f9cec61485cbb005f82843c92"} Jan 27 09:58:06 crc kubenswrapper[4869]: I0127 09:58:06.518627 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 09:58:06 crc kubenswrapper[4869]: I0127 09:58:06.524782 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 09:58:06 crc kubenswrapper[4869]: I0127 09:58:06.531076 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 09:58:07 crc kubenswrapper[4869]: I0127 09:58:07.052812 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:58:07 crc kubenswrapper[4869]: I0127 09:58:07.053103 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:58:07 crc kubenswrapper[4869]: I0127 09:58:07.062383 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:58:09 crc kubenswrapper[4869]: I0127 09:58:09.527171 4869 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:58:09 crc kubenswrapper[4869]: I0127 09:58:09.547932 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55c32b0f-8923-45c7-8035-26900ba6048b" Jan 27 09:58:09 crc kubenswrapper[4869]: I0127 09:58:09.547962 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55c32b0f-8923-45c7-8035-26900ba6048b" Jan 27 09:58:09 crc kubenswrapper[4869]: I0127 09:58:09.552401 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:58:10 crc kubenswrapper[4869]: I0127 09:58:10.554784 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-check-endpoints/0.log" Jan 27 09:58:10 crc 
kubenswrapper[4869]: I0127 09:58:10.557198 4869 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="f7877dcc6538e75f67424fd28d87d8a665fb8d686292df83a5294a5ad3fbb98c" exitCode=255 Jan 27 09:58:10 crc kubenswrapper[4869]: I0127 09:58:10.557237 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"f7877dcc6538e75f67424fd28d87d8a665fb8d686292df83a5294a5ad3fbb98c"} Jan 27 09:58:10 crc kubenswrapper[4869]: I0127 09:58:10.557479 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55c32b0f-8923-45c7-8035-26900ba6048b" Jan 27 09:58:10 crc kubenswrapper[4869]: I0127 09:58:10.557496 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55c32b0f-8923-45c7-8035-26900ba6048b" Jan 27 09:58:10 crc kubenswrapper[4869]: I0127 09:58:10.560940 4869 scope.go:117] "RemoveContainer" containerID="f7877dcc6538e75f67424fd28d87d8a665fb8d686292df83a5294a5ad3fbb98c" Jan 27 09:58:11 crc kubenswrapper[4869]: I0127 09:58:11.563244 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-check-endpoints/0.log" Jan 27 09:58:11 crc kubenswrapper[4869]: I0127 09:58:11.564982 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2f7fb2ddf955cec476b25f5dc27407bf2742858c4238ba4b00cbfd14ce712a82"} Jan 27 09:58:11 crc kubenswrapper[4869]: I0127 09:58:11.565289 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55c32b0f-8923-45c7-8035-26900ba6048b" Jan 27 09:58:11 crc kubenswrapper[4869]: I0127 09:58:11.565308 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55c32b0f-8923-45c7-8035-26900ba6048b" Jan 27 09:58:11 crc kubenswrapper[4869]: I0127 09:58:11.565465 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:58:12 crc kubenswrapper[4869]: I0127 09:58:12.054356 4869 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="071f9d83-72c5-4f23-bf1d-3336c38077e1" Jan 27 09:58:12 crc kubenswrapper[4869]: I0127 09:58:12.570937 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55c32b0f-8923-45c7-8035-26900ba6048b" Jan 27 09:58:12 crc kubenswrapper[4869]: I0127 09:58:12.570979 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55c32b0f-8923-45c7-8035-26900ba6048b" Jan 27 09:58:12 crc kubenswrapper[4869]: I0127 09:58:12.574003 4869 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="071f9d83-72c5-4f23-bf1d-3336c38077e1" Jan 27 09:58:14 crc kubenswrapper[4869]: I0127 09:58:14.448433 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 09:58:19 crc kubenswrapper[4869]: I0127 
09:58:19.655637 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 09:58:19 crc kubenswrapper[4869]: I0127 09:58:19.974448 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 09:58:20 crc kubenswrapper[4869]: I0127 09:58:20.001207 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 09:58:20 crc kubenswrapper[4869]: I0127 09:58:20.005531 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 09:58:20 crc kubenswrapper[4869]: I0127 09:58:20.200471 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 09:58:20 crc kubenswrapper[4869]: I0127 09:58:20.419585 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 09:58:20 crc kubenswrapper[4869]: I0127 09:58:20.779192 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 09:58:21 crc kubenswrapper[4869]: I0127 09:58:21.326177 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 09:58:21 crc kubenswrapper[4869]: I0127 09:58:21.475606 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 09:58:21 crc kubenswrapper[4869]: I0127 09:58:21.574908 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 09:58:21 crc kubenswrapper[4869]: I0127 09:58:21.875405 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 09:58:21 crc kubenswrapper[4869]: I0127 09:58:21.959891 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 09:58:21 crc kubenswrapper[4869]: I0127 09:58:21.998591 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 09:58:22 crc kubenswrapper[4869]: I0127 09:58:22.126280 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 09:58:22 crc kubenswrapper[4869]: I0127 09:58:22.466647 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 09:58:22 crc kubenswrapper[4869]: I0127 09:58:22.699329 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 09:58:22 crc kubenswrapper[4869]: I0127 09:58:22.799712 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 09:58:22 crc kubenswrapper[4869]: I0127 09:58:22.911863 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 09:58:23 crc kubenswrapper[4869]: I0127 09:58:23.068014 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 09:58:23 crc kubenswrapper[4869]: I0127 09:58:23.135228 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 09:58:23 crc kubenswrapper[4869]: I0127 09:58:23.162100 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 09:58:23 crc kubenswrapper[4869]: I0127 09:58:23.183406 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 09:58:23 crc kubenswrapper[4869]: I0127 09:58:23.332579 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 09:58:23 crc kubenswrapper[4869]: I0127 09:58:23.355168 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 09:58:23 crc kubenswrapper[4869]: I0127 09:58:23.409135 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 09:58:23 crc kubenswrapper[4869]: I0127 09:58:23.416900 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 09:58:23 crc kubenswrapper[4869]: I0127 09:58:23.441826 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 09:58:23 crc kubenswrapper[4869]: I0127 09:58:23.463987 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 09:58:23 crc kubenswrapper[4869]: I0127 09:58:23.518897 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 09:58:23 crc kubenswrapper[4869]: I0127 09:58:23.672556 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 09:58:23 crc kubenswrapper[4869]: I0127 09:58:23.740768 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 09:58:23 crc kubenswrapper[4869]: I0127 09:58:23.782773 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 09:58:23 crc kubenswrapper[4869]: I0127 09:58:23.839692 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 09:58:23 crc kubenswrapper[4869]: I0127 09:58:23.880525 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 09:58:23 crc kubenswrapper[4869]: I0127 09:58:23.898333 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 09:58:24 crc kubenswrapper[4869]: I0127 09:58:24.152184 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 09:58:24 crc kubenswrapper[4869]: I0127 09:58:24.206353 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 09:58:24 crc kubenswrapper[4869]: I0127 09:58:24.408104 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"oauth-serving-cert" Jan 27 09:58:24 crc kubenswrapper[4869]: I0127 09:58:24.486420 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 09:58:24 crc kubenswrapper[4869]: I0127 09:58:24.510111 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 09:58:24 crc kubenswrapper[4869]: I0127 09:58:24.518974 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 09:58:24 crc kubenswrapper[4869]: I0127 09:58:24.588853 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 09:58:24 crc kubenswrapper[4869]: I0127 09:58:24.691231 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 09:58:24 crc kubenswrapper[4869]: I0127 09:58:24.732218 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.034461 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.046225 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.100152 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.104084 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.115814 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.159687 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.185999 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.255044 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.291889 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.318901 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.595526 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.628243 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.629481 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 09:58:25 crc 
kubenswrapper[4869]: I0127 09:58:25.635110 4869 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.670778 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.742611 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.858425 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.873526 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.876490 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.880884 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.908985 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.950801 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.956325 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 09:58:25 crc kubenswrapper[4869]: I0127 09:58:25.958278 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.056015 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.087121 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.154479 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.176282 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.197868 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.262130 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.308680 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.314148 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.342940 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.364788 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.404306 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.426548 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.470372 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.488149 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.497037 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.502564 4869 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.509482 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.565983 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.627702 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.651770 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.675017 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.692121 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.795145 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.796448 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.807486 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.849659 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.907369 4869 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.932211 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.962038 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.964451 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.967301 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.972407 4869 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 09:58:26 crc kubenswrapper[4869]: I0127 09:58:26.996990 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.049160 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.080676 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.102621 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.199748 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.249081 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.347597 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.356719 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.371336 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.427088 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.438529 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.496942 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.552535 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.619591 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.665651 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.677033 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.683850 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.699325 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.728686 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.744904 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.798130 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.923511 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 09:58:27 crc kubenswrapper[4869]: I0127 09:58:27.988183 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 09:58:28 crc kubenswrapper[4869]: I0127 09:58:28.064730 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 09:58:28 crc kubenswrapper[4869]: I0127 09:58:28.198449 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 09:58:28 crc kubenswrapper[4869]: I0127 09:58:28.224027 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 09:58:28 crc kubenswrapper[4869]: I0127 09:58:28.235646 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 09:58:28 crc kubenswrapper[4869]: I0127 09:58:28.241016 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 09:58:28 crc kubenswrapper[4869]: I0127 09:58:28.245047 4869 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 09:58:28 crc kubenswrapper[4869]: I0127 09:58:28.321970 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 09:58:28 crc kubenswrapper[4869]: I0127 09:58:28.345260 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 09:58:28 crc kubenswrapper[4869]: I0127 09:58:28.619321 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 09:58:28 crc kubenswrapper[4869]: I0127 09:58:28.654623 4869 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 09:58:28 crc kubenswrapper[4869]: I0127 09:58:28.669876 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 09:58:28 crc kubenswrapper[4869]: I0127 09:58:28.706644 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 09:58:28 crc kubenswrapper[4869]: I0127 09:58:28.765725 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 09:58:28 crc kubenswrapper[4869]: I0127 09:58:28.788773 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 09:58:28 crc kubenswrapper[4869]: I0127 09:58:28.832771 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 09:58:28 crc kubenswrapper[4869]: I0127 09:58:28.834149 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 09:58:28 crc kubenswrapper[4869]: I0127 09:58:28.932391 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 09:58:28 crc kubenswrapper[4869]: I0127 09:58:28.993297 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 09:58:29 crc kubenswrapper[4869]: I0127 09:58:29.071738 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 09:58:29 crc kubenswrapper[4869]: I0127 09:58:29.079089 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 09:58:29 crc kubenswrapper[4869]: I0127 09:58:29.096967 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 09:58:29 crc kubenswrapper[4869]: I0127 09:58:29.271612 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 09:58:29 crc kubenswrapper[4869]: I0127 09:58:29.315379 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 09:58:29 crc kubenswrapper[4869]: I0127 09:58:29.426048 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 09:58:29 crc kubenswrapper[4869]: I0127 09:58:29.631286 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 09:58:29 crc kubenswrapper[4869]: I0127 09:58:29.686136 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 09:58:29 crc kubenswrapper[4869]: I0127 09:58:29.744078 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 09:58:29 crc kubenswrapper[4869]: I0127 09:58:29.916150 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 09:58:30 crc kubenswrapper[4869]: I0127 09:58:30.036877 4869 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 09:58:30 crc kubenswrapper[4869]: I0127 09:58:30.108472 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 09:58:30 crc kubenswrapper[4869]: I0127 09:58:30.111895 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 09:58:30 crc kubenswrapper[4869]: I0127 09:58:30.302675 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 09:58:30 crc kubenswrapper[4869]: I0127 09:58:30.306625 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 09:58:30 crc kubenswrapper[4869]: I0127 09:58:30.371744 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 09:58:30 crc kubenswrapper[4869]: I0127 09:58:30.407084 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 09:58:30 crc kubenswrapper[4869]: I0127 09:58:30.414493 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 09:58:30 crc kubenswrapper[4869]: I0127 09:58:30.416988 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 09:58:30 crc kubenswrapper[4869]: I0127 09:58:30.459086 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 09:58:30 crc kubenswrapper[4869]: I0127 09:58:30.493667 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 09:58:30 crc kubenswrapper[4869]: I0127 09:58:30.607102 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 09:58:30 crc kubenswrapper[4869]: I0127 09:58:30.715315 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 09:58:30 crc kubenswrapper[4869]: I0127 09:58:30.756132 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 09:58:30 crc kubenswrapper[4869]: I0127 09:58:30.789191 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 09:58:30 crc kubenswrapper[4869]: I0127 09:58:30.865358 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 09:58:30 crc kubenswrapper[4869]: I0127 09:58:30.931147 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 09:58:31 crc kubenswrapper[4869]: I0127 09:58:31.058791 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 09:58:31 crc kubenswrapper[4869]: I0127 09:58:31.106682 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 09:58:31 crc kubenswrapper[4869]: I0127 09:58:31.109158 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 09:58:31 crc kubenswrapper[4869]: I0127 09:58:31.202587 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 09:58:31 crc kubenswrapper[4869]: I0127 09:58:31.288507 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 09:58:31 crc kubenswrapper[4869]: I0127 09:58:31.384931 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 09:58:31 crc kubenswrapper[4869]: I0127 09:58:31.437203 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 09:58:31 crc kubenswrapper[4869]: I0127 09:58:31.475423 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 09:58:31 crc kubenswrapper[4869]: I0127 09:58:31.476974 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 09:58:31 crc kubenswrapper[4869]: I0127 09:58:31.486091 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 09:58:31 crc kubenswrapper[4869]: I0127 09:58:31.491240 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 09:58:31 crc kubenswrapper[4869]: I0127 09:58:31.581467 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 09:58:31 crc kubenswrapper[4869]: I0127 09:58:31.645955 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 09:58:31 crc kubenswrapper[4869]: I0127 09:58:31.679077 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 09:58:31 crc kubenswrapper[4869]: I0127 09:58:31.724784 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 09:58:31 crc kubenswrapper[4869]: I0127 09:58:31.736353 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 09:58:31 crc kubenswrapper[4869]: I0127 09:58:31.926950 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 09:58:32 crc kubenswrapper[4869]: I0127 09:58:32.028558 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 09:58:32 crc kubenswrapper[4869]: I0127 09:58:32.029421 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 09:58:32 crc kubenswrapper[4869]: I0127 09:58:32.125199 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 09:58:32 crc kubenswrapper[4869]: I0127 09:58:32.177058 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 09:58:32 crc 
kubenswrapper[4869]: I0127 09:58:32.193785 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 09:58:32 crc kubenswrapper[4869]: I0127 09:58:32.297246 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 09:58:32 crc kubenswrapper[4869]: I0127 09:58:32.418505 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 09:58:32 crc kubenswrapper[4869]: I0127 09:58:32.520022 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 09:58:32 crc kubenswrapper[4869]: I0127 09:58:32.572708 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 09:58:32 crc kubenswrapper[4869]: I0127 09:58:32.634200 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 09:58:32 crc kubenswrapper[4869]: I0127 09:58:32.674901 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 09:58:32 crc kubenswrapper[4869]: I0127 09:58:32.717549 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 09:58:32 crc kubenswrapper[4869]: I0127 09:58:32.738490 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 09:58:32 crc kubenswrapper[4869]: I0127 09:58:32.796498 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 09:58:33 crc kubenswrapper[4869]: I0127 09:58:33.044763 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 09:58:33 crc kubenswrapper[4869]: I0127 09:58:33.046627 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 09:58:33 crc kubenswrapper[4869]: I0127 09:58:33.065067 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 09:58:33 crc kubenswrapper[4869]: I0127 09:58:33.171517 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 09:58:33 crc kubenswrapper[4869]: I0127 09:58:33.194651 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 09:58:33 crc kubenswrapper[4869]: I0127 09:58:33.214364 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 09:58:33 crc kubenswrapper[4869]: I0127 09:58:33.214517 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 09:58:33 crc kubenswrapper[4869]: I0127 09:58:33.237612 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 09:58:33 crc kubenswrapper[4869]: I0127 09:58:33.326166 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 09:58:33 crc 
kubenswrapper[4869]: I0127 09:58:33.410454 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 09:58:33 crc kubenswrapper[4869]: I0127 09:58:33.425085 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 09:58:33 crc kubenswrapper[4869]: I0127 09:58:33.461730 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 09:58:33 crc kubenswrapper[4869]: I0127 09:58:33.541848 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 09:58:33 crc kubenswrapper[4869]: I0127 09:58:33.599566 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 09:58:33 crc kubenswrapper[4869]: I0127 09:58:33.670361 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 09:58:33 crc kubenswrapper[4869]: I0127 09:58:33.725375 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 09:58:33 crc kubenswrapper[4869]: I0127 09:58:33.791866 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 09:58:33 crc kubenswrapper[4869]: I0127 09:58:33.799634 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 09:58:33 crc kubenswrapper[4869]: I0127 09:58:33.811812 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 09:58:33 crc kubenswrapper[4869]: I0127 09:58:33.847284 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 09:58:33 crc kubenswrapper[4869]: I0127 09:58:33.922635 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 09:58:34 crc kubenswrapper[4869]: I0127 09:58:34.001690 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 09:58:34 crc kubenswrapper[4869]: I0127 09:58:34.013727 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 09:58:34 crc kubenswrapper[4869]: I0127 09:58:34.052173 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 09:58:34 crc kubenswrapper[4869]: I0127 09:58:34.066707 4869 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 09:58:34 crc kubenswrapper[4869]: I0127 09:58:34.202360 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 09:58:34 crc kubenswrapper[4869]: I0127 09:58:34.309628 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 09:58:34 crc kubenswrapper[4869]: I0127 09:58:34.489519 4869 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 09:58:34 crc kubenswrapper[4869]: I0127 09:58:34.654205 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 09:58:34 crc kubenswrapper[4869]: I0127 09:58:34.862026 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 09:58:35 crc kubenswrapper[4869]: I0127 09:58:35.043703 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 09:58:35 crc kubenswrapper[4869]: I0127 09:58:35.067256 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 09:58:35 crc kubenswrapper[4869]: I0127 09:58:35.072577 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 09:58:35 crc kubenswrapper[4869]: I0127 09:58:35.097980 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 09:58:35 crc kubenswrapper[4869]: I0127 09:58:35.153504 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 09:58:35 crc kubenswrapper[4869]: I0127 09:58:35.242175 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 09:58:35 crc kubenswrapper[4869]: I0127 09:58:35.525541 4869 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 09:58:35 crc kubenswrapper[4869]: I0127 09:58:35.535946 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 09:58:35 crc kubenswrapper[4869]: I0127 09:58:35.536352 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 09:58:35 crc kubenswrapper[4869]: I0127 09:58:35.536694 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55c32b0f-8923-45c7-8035-26900ba6048b" Jan 27 09:58:35 crc kubenswrapper[4869]: I0127 09:58:35.536717 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="55c32b0f-8923-45c7-8035-26900ba6048b" Jan 27 09:58:35 crc kubenswrapper[4869]: I0127 09:58:35.540536 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 09:58:35 crc kubenswrapper[4869]: I0127 09:58:35.558024 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=26.558008051 podStartE2EDuration="26.558008051s" podCreationTimestamp="2026-01-27 09:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:58:35.555677724 +0000 UTC m=+284.176101817" watchObservedRunningTime="2026-01-27 09:58:35.558008051 +0000 UTC m=+284.178432134" Jan 27 09:58:36 crc kubenswrapper[4869]: I0127 09:58:36.059461 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 09:58:36 crc kubenswrapper[4869]: I0127 09:58:36.219892 4869 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 09:58:36 crc kubenswrapper[4869]: I0127 09:58:36.329694 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 09:58:36 crc kubenswrapper[4869]: I0127 09:58:36.372352 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 09:58:36 crc kubenswrapper[4869]: I0127 09:58:36.460604 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 09:58:36 crc kubenswrapper[4869]: I0127 09:58:36.686685 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 09:58:36 crc kubenswrapper[4869]: I0127 09:58:36.741895 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 09:58:36 crc kubenswrapper[4869]: I0127 09:58:36.925149 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 09:58:37 crc kubenswrapper[4869]: I0127 09:58:37.531989 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 09:58:37 crc kubenswrapper[4869]: I0127 09:58:37.676767 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 09:58:37 crc kubenswrapper[4869]: I0127 09:58:37.708451 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 09:58:37 crc kubenswrapper[4869]: I0127 09:58:37.905946 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 09:58:37 crc kubenswrapper[4869]: I0127 09:58:37.927543 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 09:58:38 crc kubenswrapper[4869]: I0127 09:58:38.599094 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 09:58:38 crc kubenswrapper[4869]: I0127 09:58:38.667675 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 09:58:38 crc kubenswrapper[4869]: I0127 09:58:38.803605 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 09:58:43 crc kubenswrapper[4869]: I0127 09:58:43.170909 4869 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 09:58:43 crc kubenswrapper[4869]: I0127 09:58:43.171716 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://9c884022d94ddd30ea6a4c0fbcba8b98c759dd6d610b434cca5e29bcc4a07d49" gracePeriod=5 Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.413751 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kgzqt"] Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 
09:58:44.415641 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kgzqt" podUID="75088e3e-820e-444a-b9d1-ed7be4c7bbad" containerName="registry-server" containerID="cri-o://faf10e488cc5654ed22011cc18359c8077725f40fcbdc7cc37efffaed295efd0" gracePeriod=30 Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.419896 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b8njf"] Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.420183 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b8njf" podUID="95593b9c-39c7-40b7-aadc-4b8292206b30" containerName="registry-server" containerID="cri-o://7ed50e5055fa75d1fea4ccd50903eba5600c2842b650663bd6f1fdeb14cd4e50" gracePeriod=30 Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.434187 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6kntj"] Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.434432 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" podUID="a574e648-77e2-46a1-a2ad-af18e6e9ad57" containerName="marketplace-operator" containerID="cri-o://c02354e1d8c82c0e29fc61cd13cb2b5a2b24887e5683dd787db90ef951e2a2d5" gracePeriod=30 Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.443860 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qz25t"] Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.444157 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qz25t" podUID="deb3e386-81b3-48d9-ba20-8a27ea09d026" containerName="registry-server" containerID="cri-o://7a4ad4aea4cc82910318263e0ca1267c7abe58bdb1c27914c0a62fedfe86d35f" gracePeriod=30 Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.448194 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lb57z"] Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.448435 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lb57z" podUID="79eecc44-f04a-43b0-ae75-84843aa45574" containerName="registry-server" containerID="cri-o://776984e663ab8849e9a964d4a9ad7b434cb081cb9a01123e558237789cc93903" gracePeriod=30 Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.465405 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8l8tx"] Jan 27 09:58:44 crc kubenswrapper[4869]: E0127 09:58:44.465665 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.465684 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 09:58:44 crc kubenswrapper[4869]: E0127 09:58:44.465709 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a" containerName="installer" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.465717 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a" containerName="installer" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.465820 4869 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.465854 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8db33249-2c9b-4dbd-8e0c-3d7949bf2a3a" containerName="installer" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.466353 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8l8tx" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.485322 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8l8tx"] Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.551598 4869 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-6kntj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.551655 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" podUID="a574e648-77e2-46a1-a2ad-af18e6e9ad57" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.591506 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ddd95684-409f-4d98-8974-55d5374ee6ba-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8l8tx\" (UID: \"ddd95684-409f-4d98-8974-55d5374ee6ba\") " pod="openshift-marketplace/marketplace-operator-79b997595-8l8tx" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.591567 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ddd95684-409f-4d98-8974-55d5374ee6ba-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8l8tx\" (UID: \"ddd95684-409f-4d98-8974-55d5374ee6ba\") " pod="openshift-marketplace/marketplace-operator-79b997595-8l8tx" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.591680 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnkwh\" (UniqueName: \"kubernetes.io/projected/ddd95684-409f-4d98-8974-55d5374ee6ba-kube-api-access-dnkwh\") pod \"marketplace-operator-79b997595-8l8tx\" (UID: \"ddd95684-409f-4d98-8974-55d5374ee6ba\") " pod="openshift-marketplace/marketplace-operator-79b997595-8l8tx" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.693099 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnkwh\" (UniqueName: \"kubernetes.io/projected/ddd95684-409f-4d98-8974-55d5374ee6ba-kube-api-access-dnkwh\") pod \"marketplace-operator-79b997595-8l8tx\" (UID: \"ddd95684-409f-4d98-8974-55d5374ee6ba\") " pod="openshift-marketplace/marketplace-operator-79b997595-8l8tx" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.693434 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ddd95684-409f-4d98-8974-55d5374ee6ba-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-8l8tx\" (UID: \"ddd95684-409f-4d98-8974-55d5374ee6ba\") " pod="openshift-marketplace/marketplace-operator-79b997595-8l8tx" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.693461 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ddd95684-409f-4d98-8974-55d5374ee6ba-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8l8tx\" (UID: \"ddd95684-409f-4d98-8974-55d5374ee6ba\") " pod="openshift-marketplace/marketplace-operator-79b997595-8l8tx" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.695064 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ddd95684-409f-4d98-8974-55d5374ee6ba-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8l8tx\" (UID: \"ddd95684-409f-4d98-8974-55d5374ee6ba\") " pod="openshift-marketplace/marketplace-operator-79b997595-8l8tx" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.711932 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnkwh\" (UniqueName: \"kubernetes.io/projected/ddd95684-409f-4d98-8974-55d5374ee6ba-kube-api-access-dnkwh\") pod \"marketplace-operator-79b997595-8l8tx\" (UID: \"ddd95684-409f-4d98-8974-55d5374ee6ba\") " pod="openshift-marketplace/marketplace-operator-79b997595-8l8tx" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.714959 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ddd95684-409f-4d98-8974-55d5374ee6ba-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8l8tx\" (UID: \"ddd95684-409f-4d98-8974-55d5374ee6ba\") " pod="openshift-marketplace/marketplace-operator-79b997595-8l8tx" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.782747 4869 generic.go:334] "Generic (PLEG): container finished" podID="75088e3e-820e-444a-b9d1-ed7be4c7bbad" containerID="faf10e488cc5654ed22011cc18359c8077725f40fcbdc7cc37efffaed295efd0" exitCode=0 Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.782853 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgzqt" event={"ID":"75088e3e-820e-444a-b9d1-ed7be4c7bbad","Type":"ContainerDied","Data":"faf10e488cc5654ed22011cc18359c8077725f40fcbdc7cc37efffaed295efd0"} Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.782889 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kgzqt" event={"ID":"75088e3e-820e-444a-b9d1-ed7be4c7bbad","Type":"ContainerDied","Data":"734d4fc532ec51731078e6ad3b9fd0feb6bc87d6cc54b220c64c9565705e8078"} Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.782906 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="734d4fc532ec51731078e6ad3b9fd0feb6bc87d6cc54b220c64c9565705e8078" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.784419 4869 generic.go:334] "Generic (PLEG): container finished" podID="a574e648-77e2-46a1-a2ad-af18e6e9ad57" containerID="c02354e1d8c82c0e29fc61cd13cb2b5a2b24887e5683dd787db90ef951e2a2d5" exitCode=0 Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.784567 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" 
event={"ID":"a574e648-77e2-46a1-a2ad-af18e6e9ad57","Type":"ContainerDied","Data":"c02354e1d8c82c0e29fc61cd13cb2b5a2b24887e5683dd787db90ef951e2a2d5"} Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.787478 4869 generic.go:334] "Generic (PLEG): container finished" podID="deb3e386-81b3-48d9-ba20-8a27ea09d026" containerID="7a4ad4aea4cc82910318263e0ca1267c7abe58bdb1c27914c0a62fedfe86d35f" exitCode=0 Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.787525 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qz25t" event={"ID":"deb3e386-81b3-48d9-ba20-8a27ea09d026","Type":"ContainerDied","Data":"7a4ad4aea4cc82910318263e0ca1267c7abe58bdb1c27914c0a62fedfe86d35f"} Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.789992 4869 generic.go:334] "Generic (PLEG): container finished" podID="79eecc44-f04a-43b0-ae75-84843aa45574" containerID="776984e663ab8849e9a964d4a9ad7b434cb081cb9a01123e558237789cc93903" exitCode=0 Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.790054 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb57z" event={"ID":"79eecc44-f04a-43b0-ae75-84843aa45574","Type":"ContainerDied","Data":"776984e663ab8849e9a964d4a9ad7b434cb081cb9a01123e558237789cc93903"} Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.793060 4869 generic.go:334] "Generic (PLEG): container finished" podID="95593b9c-39c7-40b7-aadc-4b8292206b30" containerID="7ed50e5055fa75d1fea4ccd50903eba5600c2842b650663bd6f1fdeb14cd4e50" exitCode=0 Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.793108 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8njf" event={"ID":"95593b9c-39c7-40b7-aadc-4b8292206b30","Type":"ContainerDied","Data":"7ed50e5055fa75d1fea4ccd50903eba5600c2842b650663bd6f1fdeb14cd4e50"} Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.893181 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8l8tx" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.897006 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kgzqt" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.906074 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.925410 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b8njf" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.927508 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lb57z" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.928760 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qz25t" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.995743 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a574e648-77e2-46a1-a2ad-af18e6e9ad57-marketplace-operator-metrics\") pod \"a574e648-77e2-46a1-a2ad-af18e6e9ad57\" (UID: \"a574e648-77e2-46a1-a2ad-af18e6e9ad57\") " Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.995790 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95593b9c-39c7-40b7-aadc-4b8292206b30-utilities\") pod \"95593b9c-39c7-40b7-aadc-4b8292206b30\" (UID: \"95593b9c-39c7-40b7-aadc-4b8292206b30\") " Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.995986 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a574e648-77e2-46a1-a2ad-af18e6e9ad57-marketplace-trusted-ca\") pod \"a574e648-77e2-46a1-a2ad-af18e6e9ad57\" (UID: \"a574e648-77e2-46a1-a2ad-af18e6e9ad57\") " Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.996650 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qntpd\" (UniqueName: \"kubernetes.io/projected/79eecc44-f04a-43b0-ae75-84843aa45574-kube-api-access-qntpd\") pod \"79eecc44-f04a-43b0-ae75-84843aa45574\" (UID: \"79eecc44-f04a-43b0-ae75-84843aa45574\") " Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.996760 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79eecc44-f04a-43b0-ae75-84843aa45574-utilities\") pod \"79eecc44-f04a-43b0-ae75-84843aa45574\" (UID: \"79eecc44-f04a-43b0-ae75-84843aa45574\") " Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.996785 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb3e386-81b3-48d9-ba20-8a27ea09d026-catalog-content\") pod \"deb3e386-81b3-48d9-ba20-8a27ea09d026\" (UID: \"deb3e386-81b3-48d9-ba20-8a27ea09d026\") " Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.996815 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsdbj\" (UniqueName: \"kubernetes.io/projected/a574e648-77e2-46a1-a2ad-af18e6e9ad57-kube-api-access-lsdbj\") pod \"a574e648-77e2-46a1-a2ad-af18e6e9ad57\" (UID: \"a574e648-77e2-46a1-a2ad-af18e6e9ad57\") " Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.996864 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75088e3e-820e-444a-b9d1-ed7be4c7bbad-utilities\") pod \"75088e3e-820e-444a-b9d1-ed7be4c7bbad\" (UID: \"75088e3e-820e-444a-b9d1-ed7be4c7bbad\") " Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.996893 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6lpg\" (UniqueName: \"kubernetes.io/projected/75088e3e-820e-444a-b9d1-ed7be4c7bbad-kube-api-access-v6lpg\") pod \"75088e3e-820e-444a-b9d1-ed7be4c7bbad\" (UID: \"75088e3e-820e-444a-b9d1-ed7be4c7bbad\") " Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.996913 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/deb3e386-81b3-48d9-ba20-8a27ea09d026-utilities\") pod \"deb3e386-81b3-48d9-ba20-8a27ea09d026\" (UID: \"deb3e386-81b3-48d9-ba20-8a27ea09d026\") " Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.996939 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw7c9\" (UniqueName: \"kubernetes.io/projected/deb3e386-81b3-48d9-ba20-8a27ea09d026-kube-api-access-lw7c9\") pod \"deb3e386-81b3-48d9-ba20-8a27ea09d026\" (UID: \"deb3e386-81b3-48d9-ba20-8a27ea09d026\") " Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.996988 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95593b9c-39c7-40b7-aadc-4b8292206b30-catalog-content\") pod \"95593b9c-39c7-40b7-aadc-4b8292206b30\" (UID: \"95593b9c-39c7-40b7-aadc-4b8292206b30\") " Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.997016 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75088e3e-820e-444a-b9d1-ed7be4c7bbad-catalog-content\") pod \"75088e3e-820e-444a-b9d1-ed7be4c7bbad\" (UID: \"75088e3e-820e-444a-b9d1-ed7be4c7bbad\") " Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.997079 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95593b9c-39c7-40b7-aadc-4b8292206b30-utilities" (OuterVolumeSpecName: "utilities") pod "95593b9c-39c7-40b7-aadc-4b8292206b30" (UID: "95593b9c-39c7-40b7-aadc-4b8292206b30"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.997208 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a574e648-77e2-46a1-a2ad-af18e6e9ad57-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "a574e648-77e2-46a1-a2ad-af18e6e9ad57" (UID: "a574e648-77e2-46a1-a2ad-af18e6e9ad57"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.997352 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79eecc44-f04a-43b0-ae75-84843aa45574-catalog-content\") pod \"79eecc44-f04a-43b0-ae75-84843aa45574\" (UID: \"79eecc44-f04a-43b0-ae75-84843aa45574\") " Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.997395 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp5tr\" (UniqueName: \"kubernetes.io/projected/95593b9c-39c7-40b7-aadc-4b8292206b30-kube-api-access-vp5tr\") pod \"95593b9c-39c7-40b7-aadc-4b8292206b30\" (UID: \"95593b9c-39c7-40b7-aadc-4b8292206b30\") " Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.997844 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79eecc44-f04a-43b0-ae75-84843aa45574-utilities" (OuterVolumeSpecName: "utilities") pod "79eecc44-f04a-43b0-ae75-84843aa45574" (UID: "79eecc44-f04a-43b0-ae75-84843aa45574"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.998186 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75088e3e-820e-444a-b9d1-ed7be4c7bbad-utilities" (OuterVolumeSpecName: "utilities") pod "75088e3e-820e-444a-b9d1-ed7be4c7bbad" (UID: "75088e3e-820e-444a-b9d1-ed7be4c7bbad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.999650 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95593b9c-39c7-40b7-aadc-4b8292206b30-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.999667 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a574e648-77e2-46a1-a2ad-af18e6e9ad57-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.999678 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79eecc44-f04a-43b0-ae75-84843aa45574-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:44 crc kubenswrapper[4869]: I0127 09:58:44.999687 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75088e3e-820e-444a-b9d1-ed7be4c7bbad-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.001860 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deb3e386-81b3-48d9-ba20-8a27ea09d026-utilities" (OuterVolumeSpecName: "utilities") pod "deb3e386-81b3-48d9-ba20-8a27ea09d026" (UID: "deb3e386-81b3-48d9-ba20-8a27ea09d026"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.002050 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75088e3e-820e-444a-b9d1-ed7be4c7bbad-kube-api-access-v6lpg" (OuterVolumeSpecName: "kube-api-access-v6lpg") pod "75088e3e-820e-444a-b9d1-ed7be4c7bbad" (UID: "75088e3e-820e-444a-b9d1-ed7be4c7bbad"). InnerVolumeSpecName "kube-api-access-v6lpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.002518 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deb3e386-81b3-48d9-ba20-8a27ea09d026-kube-api-access-lw7c9" (OuterVolumeSpecName: "kube-api-access-lw7c9") pod "deb3e386-81b3-48d9-ba20-8a27ea09d026" (UID: "deb3e386-81b3-48d9-ba20-8a27ea09d026"). InnerVolumeSpecName "kube-api-access-lw7c9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.002534 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95593b9c-39c7-40b7-aadc-4b8292206b30-kube-api-access-vp5tr" (OuterVolumeSpecName: "kube-api-access-vp5tr") pod "95593b9c-39c7-40b7-aadc-4b8292206b30" (UID: "95593b9c-39c7-40b7-aadc-4b8292206b30"). InnerVolumeSpecName "kube-api-access-vp5tr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.002918 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a574e648-77e2-46a1-a2ad-af18e6e9ad57-kube-api-access-lsdbj" (OuterVolumeSpecName: "kube-api-access-lsdbj") pod "a574e648-77e2-46a1-a2ad-af18e6e9ad57" (UID: "a574e648-77e2-46a1-a2ad-af18e6e9ad57"). InnerVolumeSpecName "kube-api-access-lsdbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.005362 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79eecc44-f04a-43b0-ae75-84843aa45574-kube-api-access-qntpd" (OuterVolumeSpecName: "kube-api-access-qntpd") pod "79eecc44-f04a-43b0-ae75-84843aa45574" (UID: "79eecc44-f04a-43b0-ae75-84843aa45574"). InnerVolumeSpecName "kube-api-access-qntpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.015109 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a574e648-77e2-46a1-a2ad-af18e6e9ad57-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "a574e648-77e2-46a1-a2ad-af18e6e9ad57" (UID: "a574e648-77e2-46a1-a2ad-af18e6e9ad57"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.031748 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deb3e386-81b3-48d9-ba20-8a27ea09d026-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "deb3e386-81b3-48d9-ba20-8a27ea09d026" (UID: "deb3e386-81b3-48d9-ba20-8a27ea09d026"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.066567 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95593b9c-39c7-40b7-aadc-4b8292206b30-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95593b9c-39c7-40b7-aadc-4b8292206b30" (UID: "95593b9c-39c7-40b7-aadc-4b8292206b30"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.067338 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75088e3e-820e-444a-b9d1-ed7be4c7bbad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75088e3e-820e-444a-b9d1-ed7be4c7bbad" (UID: "75088e3e-820e-444a-b9d1-ed7be4c7bbad"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.102725 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb3e386-81b3-48d9-ba20-8a27ea09d026-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.102776 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsdbj\" (UniqueName: \"kubernetes.io/projected/a574e648-77e2-46a1-a2ad-af18e6e9ad57-kube-api-access-lsdbj\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.102799 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6lpg\" (UniqueName: \"kubernetes.io/projected/75088e3e-820e-444a-b9d1-ed7be4c7bbad-kube-api-access-v6lpg\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.102816 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb3e386-81b3-48d9-ba20-8a27ea09d026-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.102855 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw7c9\" (UniqueName: \"kubernetes.io/projected/deb3e386-81b3-48d9-ba20-8a27ea09d026-kube-api-access-lw7c9\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.102872 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95593b9c-39c7-40b7-aadc-4b8292206b30-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.102888 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75088e3e-820e-444a-b9d1-ed7be4c7bbad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.102904 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp5tr\" (UniqueName: \"kubernetes.io/projected/95593b9c-39c7-40b7-aadc-4b8292206b30-kube-api-access-vp5tr\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.102921 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a574e648-77e2-46a1-a2ad-af18e6e9ad57-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.102956 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qntpd\" (UniqueName: \"kubernetes.io/projected/79eecc44-f04a-43b0-ae75-84843aa45574-kube-api-access-qntpd\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.143157 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79eecc44-f04a-43b0-ae75-84843aa45574-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79eecc44-f04a-43b0-ae75-84843aa45574" (UID: "79eecc44-f04a-43b0-ae75-84843aa45574"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.204274 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79eecc44-f04a-43b0-ae75-84843aa45574-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.334716 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8l8tx"] Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.800537 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qz25t" event={"ID":"deb3e386-81b3-48d9-ba20-8a27ea09d026","Type":"ContainerDied","Data":"41e3390683c65e22040730d8a629d7e316ccfa8a540212031d3af8041b2806b0"} Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.800582 4869 scope.go:117] "RemoveContainer" containerID="7a4ad4aea4cc82910318263e0ca1267c7abe58bdb1c27914c0a62fedfe86d35f" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.800646 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qz25t" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.803994 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lb57z" event={"ID":"79eecc44-f04a-43b0-ae75-84843aa45574","Type":"ContainerDied","Data":"bbd83b88950964fa96a6f63360541f10b1ee03b3113f1bfcdbfa7ad1339229fa"} Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.804090 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lb57z" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.805543 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8l8tx" event={"ID":"ddd95684-409f-4d98-8974-55d5374ee6ba","Type":"ContainerStarted","Data":"c7af21a458342e7ed7c7e19f3e1f8dca10947874aa23471d78cc36a16be6cf5a"} Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.805566 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8l8tx" event={"ID":"ddd95684-409f-4d98-8974-55d5374ee6ba","Type":"ContainerStarted","Data":"7621f7dafa4150ca346151162b83c4d4fc3257a405002a4c73c8e9b582c9ca90"} Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.806206 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8l8tx" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.818932 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b8njf" event={"ID":"95593b9c-39c7-40b7-aadc-4b8292206b30","Type":"ContainerDied","Data":"cc80796aaf88d442f54255e84be329dd8caec807374fb61e456ad2b43f7f5aef"} Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.819003 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b8njf" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.822323 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.823004 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6kntj" event={"ID":"a574e648-77e2-46a1-a2ad-af18e6e9ad57","Type":"ContainerDied","Data":"8412deb0808bc5033b570d687e3c461688c519c51fbb63f5529421b7755fcdaa"} Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.823223 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kgzqt" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.829212 4869 scope.go:117] "RemoveContainer" containerID="e8b730a1b112b6b4e05da871aefaf0510e0ecdeceecf4a900286bd73a1cf53fd" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.833971 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8l8tx" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.849409 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8l8tx" podStartSLOduration=1.8493858429999999 podStartE2EDuration="1.849385843s" podCreationTimestamp="2026-01-27 09:58:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:58:45.837864416 +0000 UTC m=+294.458288569" watchObservedRunningTime="2026-01-27 09:58:45.849385843 +0000 UTC m=+294.469809936" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.866022 4869 scope.go:117] "RemoveContainer" containerID="9cf620e7843111d4d81d7a25ae99ec547626736658dac79385c3275ab9ce7309" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.894152 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lb57z"] Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.898406 4869 scope.go:117] "RemoveContainer" containerID="776984e663ab8849e9a964d4a9ad7b434cb081cb9a01123e558237789cc93903" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.909531 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lb57z"] Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.914634 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6kntj"] Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.924896 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6kntj"] Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.927434 4869 scope.go:117] "RemoveContainer" containerID="caf88914c59daf47ba44a4e0941a46d4f382a2e3035b31cf575d47287ce5b18f" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.932099 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kgzqt"] Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.935819 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kgzqt"] Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.939504 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b8njf"] Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.943522 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b8njf"] Jan 27 09:58:45 crc 
kubenswrapper[4869]: I0127 09:58:45.944892 4869 scope.go:117] "RemoveContainer" containerID="b362246e5b26f5a3c352101a924ba895780d601834dad5eaa105b9c82f27a1fb" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.947628 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qz25t"] Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.950440 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qz25t"] Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.958936 4869 scope.go:117] "RemoveContainer" containerID="7ed50e5055fa75d1fea4ccd50903eba5600c2842b650663bd6f1fdeb14cd4e50" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.969706 4869 scope.go:117] "RemoveContainer" containerID="b03ea17ec7836c693659a6962da9f3e53567a7019d7f212407e60a8f3e8c63dd" Jan 27 09:58:45 crc kubenswrapper[4869]: I0127 09:58:45.986714 4869 scope.go:117] "RemoveContainer" containerID="cf87f3058e957a5b643bc532c799f1fa2f6a2e63a835f96f8dbdcb4564d4affd" Jan 27 09:58:46 crc kubenswrapper[4869]: I0127 09:58:46.008581 4869 scope.go:117] "RemoveContainer" containerID="c02354e1d8c82c0e29fc61cd13cb2b5a2b24887e5683dd787db90ef951e2a2d5" Jan 27 09:58:46 crc kubenswrapper[4869]: I0127 09:58:46.040354 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75088e3e-820e-444a-b9d1-ed7be4c7bbad" path="/var/lib/kubelet/pods/75088e3e-820e-444a-b9d1-ed7be4c7bbad/volumes" Jan 27 09:58:46 crc kubenswrapper[4869]: I0127 09:58:46.040963 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79eecc44-f04a-43b0-ae75-84843aa45574" path="/var/lib/kubelet/pods/79eecc44-f04a-43b0-ae75-84843aa45574/volumes" Jan 27 09:58:46 crc kubenswrapper[4869]: I0127 09:58:46.041541 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95593b9c-39c7-40b7-aadc-4b8292206b30" path="/var/lib/kubelet/pods/95593b9c-39c7-40b7-aadc-4b8292206b30/volumes" Jan 27 09:58:46 crc kubenswrapper[4869]: I0127 09:58:46.042508 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a574e648-77e2-46a1-a2ad-af18e6e9ad57" path="/var/lib/kubelet/pods/a574e648-77e2-46a1-a2ad-af18e6e9ad57/volumes" Jan 27 09:58:46 crc kubenswrapper[4869]: I0127 09:58:46.042933 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deb3e386-81b3-48d9-ba20-8a27ea09d026" path="/var/lib/kubelet/pods/deb3e386-81b3-48d9-ba20-8a27ea09d026/volumes" Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.740156 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.740498 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.845778 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.845853 4869 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="9c884022d94ddd30ea6a4c0fbcba8b98c759dd6d610b434cca5e29bcc4a07d49" exitCode=137 Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.845931 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.845940 4869 scope.go:117] "RemoveContainer" containerID="9c884022d94ddd30ea6a4c0fbcba8b98c759dd6d610b434cca5e29bcc4a07d49" Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.858371 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.858428 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.858454 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.858492 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.858534 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.858560 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.858591 4869 scope.go:117] "RemoveContainer" containerID="9c884022d94ddd30ea6a4c0fbcba8b98c759dd6d610b434cca5e29bcc4a07d49" Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.858665 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.858694 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.858773 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.859030 4869 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.859047 4869 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.859059 4869 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.859071 4869 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:48 crc kubenswrapper[4869]: E0127 09:58:48.859065 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c884022d94ddd30ea6a4c0fbcba8b98c759dd6d610b434cca5e29bcc4a07d49\": container with ID starting with 9c884022d94ddd30ea6a4c0fbcba8b98c759dd6d610b434cca5e29bcc4a07d49 not found: ID does not exist" containerID="9c884022d94ddd30ea6a4c0fbcba8b98c759dd6d610b434cca5e29bcc4a07d49" Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.859106 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c884022d94ddd30ea6a4c0fbcba8b98c759dd6d610b434cca5e29bcc4a07d49"} err="failed to get container status \"9c884022d94ddd30ea6a4c0fbcba8b98c759dd6d610b434cca5e29bcc4a07d49\": rpc error: code = NotFound desc = could not find container \"9c884022d94ddd30ea6a4c0fbcba8b98c759dd6d610b434cca5e29bcc4a07d49\": container with ID starting with 9c884022d94ddd30ea6a4c0fbcba8b98c759dd6d610b434cca5e29bcc4a07d49 not found: ID does not exist" Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.864962 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 09:58:48 crc kubenswrapper[4869]: I0127 09:58:48.960612 4869 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 09:58:50 crc kubenswrapper[4869]: I0127 09:58:50.040273 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 09:58:51 crc kubenswrapper[4869]: I0127 09:58:51.833189 4869 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.458352 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4sqz8"] Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.460187 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" podUID="6951bfc9-9908-4404-9000-cc243c35a314" containerName="controller-manager" containerID="cri-o://9942e9717e18a890f3a560a96f2d6d4a4b791a8c0a5bdfacd39ada94554f8c12" gracePeriod=30 Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.574949 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz"] Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.575378 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" podUID="17cbc9af-17b4-4815-b527-9d9d9c5112fc" containerName="route-controller-manager" containerID="cri-o://1091c6e1be645b0352ee4de7554eb1fb9b1396b9c0c4ff8ff83edb7a7be5d3dc" gracePeriod=30 Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.828120 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.896713 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.985914 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6951bfc9-9908-4404-9000-cc243c35a314-proxy-ca-bundles\") pod \"6951bfc9-9908-4404-9000-cc243c35a314\" (UID: \"6951bfc9-9908-4404-9000-cc243c35a314\") " Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.986251 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6951bfc9-9908-4404-9000-cc243c35a314-serving-cert\") pod \"6951bfc9-9908-4404-9000-cc243c35a314\" (UID: \"6951bfc9-9908-4404-9000-cc243c35a314\") " Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.986284 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17cbc9af-17b4-4815-b527-9d9d9c5112fc-config\") pod \"17cbc9af-17b4-4815-b527-9d9d9c5112fc\" (UID: \"17cbc9af-17b4-4815-b527-9d9d9c5112fc\") " Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.986306 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17cbc9af-17b4-4815-b527-9d9d9c5112fc-client-ca\") pod \"17cbc9af-17b4-4815-b527-9d9d9c5112fc\" (UID: \"17cbc9af-17b4-4815-b527-9d9d9c5112fc\") " Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.986332 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7g4f\" (UniqueName: \"kubernetes.io/projected/6951bfc9-9908-4404-9000-cc243c35a314-kube-api-access-f7g4f\") pod \"6951bfc9-9908-4404-9000-cc243c35a314\" (UID: \"6951bfc9-9908-4404-9000-cc243c35a314\") " Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.986367 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6951bfc9-9908-4404-9000-cc243c35a314-client-ca\") pod \"6951bfc9-9908-4404-9000-cc243c35a314\" (UID: \"6951bfc9-9908-4404-9000-cc243c35a314\") " Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.986389 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6951bfc9-9908-4404-9000-cc243c35a314-config\") pod \"6951bfc9-9908-4404-9000-cc243c35a314\" (UID: \"6951bfc9-9908-4404-9000-cc243c35a314\") " Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.986421 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnlgk\" (UniqueName: \"kubernetes.io/projected/17cbc9af-17b4-4815-b527-9d9d9c5112fc-kube-api-access-hnlgk\") pod \"17cbc9af-17b4-4815-b527-9d9d9c5112fc\" (UID: \"17cbc9af-17b4-4815-b527-9d9d9c5112fc\") " Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.986454 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17cbc9af-17b4-4815-b527-9d9d9c5112fc-serving-cert\") pod \"17cbc9af-17b4-4815-b527-9d9d9c5112fc\" (UID: \"17cbc9af-17b4-4815-b527-9d9d9c5112fc\") " Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.986793 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6951bfc9-9908-4404-9000-cc243c35a314-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod 
"6951bfc9-9908-4404-9000-cc243c35a314" (UID: "6951bfc9-9908-4404-9000-cc243c35a314"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.987130 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6951bfc9-9908-4404-9000-cc243c35a314-client-ca" (OuterVolumeSpecName: "client-ca") pod "6951bfc9-9908-4404-9000-cc243c35a314" (UID: "6951bfc9-9908-4404-9000-cc243c35a314"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.987264 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17cbc9af-17b4-4815-b527-9d9d9c5112fc-config" (OuterVolumeSpecName: "config") pod "17cbc9af-17b4-4815-b527-9d9d9c5112fc" (UID: "17cbc9af-17b4-4815-b527-9d9d9c5112fc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.987523 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17cbc9af-17b4-4815-b527-9d9d9c5112fc-client-ca" (OuterVolumeSpecName: "client-ca") pod "17cbc9af-17b4-4815-b527-9d9d9c5112fc" (UID: "17cbc9af-17b4-4815-b527-9d9d9c5112fc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.987996 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6951bfc9-9908-4404-9000-cc243c35a314-config" (OuterVolumeSpecName: "config") pod "6951bfc9-9908-4404-9000-cc243c35a314" (UID: "6951bfc9-9908-4404-9000-cc243c35a314"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.991609 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6951bfc9-9908-4404-9000-cc243c35a314-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6951bfc9-9908-4404-9000-cc243c35a314" (UID: "6951bfc9-9908-4404-9000-cc243c35a314"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.991669 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6951bfc9-9908-4404-9000-cc243c35a314-kube-api-access-f7g4f" (OuterVolumeSpecName: "kube-api-access-f7g4f") pod "6951bfc9-9908-4404-9000-cc243c35a314" (UID: "6951bfc9-9908-4404-9000-cc243c35a314"). InnerVolumeSpecName "kube-api-access-f7g4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.991682 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17cbc9af-17b4-4815-b527-9d9d9c5112fc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "17cbc9af-17b4-4815-b527-9d9d9c5112fc" (UID: "17cbc9af-17b4-4815-b527-9d9d9c5112fc"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:59:19 crc kubenswrapper[4869]: I0127 09:59:19.991687 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17cbc9af-17b4-4815-b527-9d9d9c5112fc-kube-api-access-hnlgk" (OuterVolumeSpecName: "kube-api-access-hnlgk") pod "17cbc9af-17b4-4815-b527-9d9d9c5112fc" (UID: "17cbc9af-17b4-4815-b527-9d9d9c5112fc"). InnerVolumeSpecName "kube-api-access-hnlgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.018142 4869 generic.go:334] "Generic (PLEG): container finished" podID="6951bfc9-9908-4404-9000-cc243c35a314" containerID="9942e9717e18a890f3a560a96f2d6d4a4b791a8c0a5bdfacd39ada94554f8c12" exitCode=0 Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.018193 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.018202 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" event={"ID":"6951bfc9-9908-4404-9000-cc243c35a314","Type":"ContainerDied","Data":"9942e9717e18a890f3a560a96f2d6d4a4b791a8c0a5bdfacd39ada94554f8c12"} Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.018266 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4sqz8" event={"ID":"6951bfc9-9908-4404-9000-cc243c35a314","Type":"ContainerDied","Data":"333f9d4ced60af2986cdda9275136ff7c26d112571d600b03571ed75bc80bb4e"} Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.018285 4869 scope.go:117] "RemoveContainer" containerID="9942e9717e18a890f3a560a96f2d6d4a4b791a8c0a5bdfacd39ada94554f8c12" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.019851 4869 generic.go:334] "Generic (PLEG): container finished" podID="17cbc9af-17b4-4815-b527-9d9d9c5112fc" containerID="1091c6e1be645b0352ee4de7554eb1fb9b1396b9c0c4ff8ff83edb7a7be5d3dc" exitCode=0 Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.019895 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" event={"ID":"17cbc9af-17b4-4815-b527-9d9d9c5112fc","Type":"ContainerDied","Data":"1091c6e1be645b0352ee4de7554eb1fb9b1396b9c0c4ff8ff83edb7a7be5d3dc"} Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.019917 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" event={"ID":"17cbc9af-17b4-4815-b527-9d9d9c5112fc","Type":"ContainerDied","Data":"2ca11e1802f9fdec39c36a521f2fea78195615800b88319d01d88cafb34f52a3"} Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.019967 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.031792 4869 scope.go:117] "RemoveContainer" containerID="9942e9717e18a890f3a560a96f2d6d4a4b791a8c0a5bdfacd39ada94554f8c12" Jan 27 09:59:20 crc kubenswrapper[4869]: E0127 09:59:20.032263 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9942e9717e18a890f3a560a96f2d6d4a4b791a8c0a5bdfacd39ada94554f8c12\": container with ID starting with 9942e9717e18a890f3a560a96f2d6d4a4b791a8c0a5bdfacd39ada94554f8c12 not found: ID does not exist" containerID="9942e9717e18a890f3a560a96f2d6d4a4b791a8c0a5bdfacd39ada94554f8c12" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.032351 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9942e9717e18a890f3a560a96f2d6d4a4b791a8c0a5bdfacd39ada94554f8c12"} err="failed to get container status \"9942e9717e18a890f3a560a96f2d6d4a4b791a8c0a5bdfacd39ada94554f8c12\": rpc error: code = NotFound desc = could not find container \"9942e9717e18a890f3a560a96f2d6d4a4b791a8c0a5bdfacd39ada94554f8c12\": container with ID starting with 9942e9717e18a890f3a560a96f2d6d4a4b791a8c0a5bdfacd39ada94554f8c12 not found: ID does not exist" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.032416 4869 scope.go:117] "RemoveContainer" containerID="1091c6e1be645b0352ee4de7554eb1fb9b1396b9c0c4ff8ff83edb7a7be5d3dc" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.052393 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4sqz8"] Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.055486 4869 scope.go:117] "RemoveContainer" containerID="1091c6e1be645b0352ee4de7554eb1fb9b1396b9c0c4ff8ff83edb7a7be5d3dc" Jan 27 09:59:20 crc kubenswrapper[4869]: E0127 09:59:20.056016 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1091c6e1be645b0352ee4de7554eb1fb9b1396b9c0c4ff8ff83edb7a7be5d3dc\": container with ID starting with 1091c6e1be645b0352ee4de7554eb1fb9b1396b9c0c4ff8ff83edb7a7be5d3dc not found: ID does not exist" containerID="1091c6e1be645b0352ee4de7554eb1fb9b1396b9c0c4ff8ff83edb7a7be5d3dc" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.056046 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1091c6e1be645b0352ee4de7554eb1fb9b1396b9c0c4ff8ff83edb7a7be5d3dc"} err="failed to get container status \"1091c6e1be645b0352ee4de7554eb1fb9b1396b9c0c4ff8ff83edb7a7be5d3dc\": rpc error: code = NotFound desc = could not find container \"1091c6e1be645b0352ee4de7554eb1fb9b1396b9c0c4ff8ff83edb7a7be5d3dc\": container with ID starting with 1091c6e1be645b0352ee4de7554eb1fb9b1396b9c0c4ff8ff83edb7a7be5d3dc not found: ID does not exist" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.057181 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4sqz8"] Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.060411 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz"] Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.063073 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vwhlz"] Jan 27 
09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.087825 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6951bfc9-9908-4404-9000-cc243c35a314-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.087861 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17cbc9af-17b4-4815-b527-9d9d9c5112fc-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.087871 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17cbc9af-17b4-4815-b527-9d9d9c5112fc-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.087879 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7g4f\" (UniqueName: \"kubernetes.io/projected/6951bfc9-9908-4404-9000-cc243c35a314-kube-api-access-f7g4f\") on node \"crc\" DevicePath \"\"" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.087887 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6951bfc9-9908-4404-9000-cc243c35a314-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.087895 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6951bfc9-9908-4404-9000-cc243c35a314-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.087903 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnlgk\" (UniqueName: \"kubernetes.io/projected/17cbc9af-17b4-4815-b527-9d9d9c5112fc-kube-api-access-hnlgk\") on node \"crc\" DevicePath \"\"" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.087912 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17cbc9af-17b4-4815-b527-9d9d9c5112fc-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.087920 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6951bfc9-9908-4404-9000-cc243c35a314-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.910076 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-546d8d4f9d-pz622"] Jan 27 09:59:20 crc kubenswrapper[4869]: E0127 09:59:20.910688 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17cbc9af-17b4-4815-b527-9d9d9c5112fc" containerName="route-controller-manager" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.910722 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="17cbc9af-17b4-4815-b527-9d9d9c5112fc" containerName="route-controller-manager" Jan 27 09:59:20 crc kubenswrapper[4869]: E0127 09:59:20.910769 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75088e3e-820e-444a-b9d1-ed7be4c7bbad" containerName="registry-server" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.910788 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="75088e3e-820e-444a-b9d1-ed7be4c7bbad" containerName="registry-server" Jan 27 09:59:20 crc kubenswrapper[4869]: E0127 09:59:20.910813 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95593b9c-39c7-40b7-aadc-4b8292206b30" 
containerName="registry-server" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.910924 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="95593b9c-39c7-40b7-aadc-4b8292206b30" containerName="registry-server" Jan 27 09:59:20 crc kubenswrapper[4869]: E0127 09:59:20.910968 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79eecc44-f04a-43b0-ae75-84843aa45574" containerName="extract-utilities" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.910999 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="79eecc44-f04a-43b0-ae75-84843aa45574" containerName="extract-utilities" Jan 27 09:59:20 crc kubenswrapper[4869]: E0127 09:59:20.911028 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75088e3e-820e-444a-b9d1-ed7be4c7bbad" containerName="extract-utilities" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.911045 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="75088e3e-820e-444a-b9d1-ed7be4c7bbad" containerName="extract-utilities" Jan 27 09:59:20 crc kubenswrapper[4869]: E0127 09:59:20.911091 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb3e386-81b3-48d9-ba20-8a27ea09d026" containerName="extract-utilities" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.911110 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb3e386-81b3-48d9-ba20-8a27ea09d026" containerName="extract-utilities" Jan 27 09:59:20 crc kubenswrapper[4869]: E0127 09:59:20.911207 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95593b9c-39c7-40b7-aadc-4b8292206b30" containerName="extract-content" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.911263 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="95593b9c-39c7-40b7-aadc-4b8292206b30" containerName="extract-content" Jan 27 09:59:20 crc kubenswrapper[4869]: E0127 09:59:20.911663 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6951bfc9-9908-4404-9000-cc243c35a314" containerName="controller-manager" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.911677 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6951bfc9-9908-4404-9000-cc243c35a314" containerName="controller-manager" Jan 27 09:59:20 crc kubenswrapper[4869]: E0127 09:59:20.911718 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79eecc44-f04a-43b0-ae75-84843aa45574" containerName="extract-content" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.911728 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="79eecc44-f04a-43b0-ae75-84843aa45574" containerName="extract-content" Jan 27 09:59:20 crc kubenswrapper[4869]: E0127 09:59:20.912174 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a574e648-77e2-46a1-a2ad-af18e6e9ad57" containerName="marketplace-operator" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.912202 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a574e648-77e2-46a1-a2ad-af18e6e9ad57" containerName="marketplace-operator" Jan 27 09:59:20 crc kubenswrapper[4869]: E0127 09:59:20.912219 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95593b9c-39c7-40b7-aadc-4b8292206b30" containerName="extract-utilities" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.912228 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="95593b9c-39c7-40b7-aadc-4b8292206b30" containerName="extract-utilities" Jan 27 09:59:20 crc kubenswrapper[4869]: E0127 09:59:20.912271 4869 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="deb3e386-81b3-48d9-ba20-8a27ea09d026" containerName="extract-content" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.912280 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb3e386-81b3-48d9-ba20-8a27ea09d026" containerName="extract-content" Jan 27 09:59:20 crc kubenswrapper[4869]: E0127 09:59:20.912291 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79eecc44-f04a-43b0-ae75-84843aa45574" containerName="registry-server" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.912300 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="79eecc44-f04a-43b0-ae75-84843aa45574" containerName="registry-server" Jan 27 09:59:20 crc kubenswrapper[4869]: E0127 09:59:20.912312 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb3e386-81b3-48d9-ba20-8a27ea09d026" containerName="registry-server" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.912319 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb3e386-81b3-48d9-ba20-8a27ea09d026" containerName="registry-server" Jan 27 09:59:20 crc kubenswrapper[4869]: E0127 09:59:20.912329 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75088e3e-820e-444a-b9d1-ed7be4c7bbad" containerName="extract-content" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.912337 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="75088e3e-820e-444a-b9d1-ed7be4c7bbad" containerName="extract-content" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.912523 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="75088e3e-820e-444a-b9d1-ed7be4c7bbad" containerName="registry-server" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.912536 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="17cbc9af-17b4-4815-b527-9d9d9c5112fc" containerName="route-controller-manager" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.912547 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6951bfc9-9908-4404-9000-cc243c35a314" containerName="controller-manager" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.912561 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a574e648-77e2-46a1-a2ad-af18e6e9ad57" containerName="marketplace-operator" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.912572 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="79eecc44-f04a-43b0-ae75-84843aa45574" containerName="registry-server" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.912583 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="deb3e386-81b3-48d9-ba20-8a27ea09d026" containerName="registry-server" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.912593 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="95593b9c-39c7-40b7-aadc-4b8292206b30" containerName="registry-server" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.913128 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.917338 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.917486 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.917588 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.917746 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.917895 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.918801 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.927228 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr"] Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.928541 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.932877 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.933062 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.934913 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr"] Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.935517 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.935784 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.936095 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.938038 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.940008 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 09:59:20 crc kubenswrapper[4869]: I0127 09:59:20.941378 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-546d8d4f9d-pz622"] Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.011581 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/27f50297-ffae-4fb1-bbef-1a1c6dee329f-serving-cert\") pod \"controller-manager-546d8d4f9d-pz622\" (UID: \"27f50297-ffae-4fb1-bbef-1a1c6dee329f\") " pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.011675 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzkqd\" (UniqueName: \"kubernetes.io/projected/27f50297-ffae-4fb1-bbef-1a1c6dee329f-kube-api-access-lzkqd\") pod \"controller-manager-546d8d4f9d-pz622\" (UID: \"27f50297-ffae-4fb1-bbef-1a1c6dee329f\") " pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.011740 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27f50297-ffae-4fb1-bbef-1a1c6dee329f-proxy-ca-bundles\") pod \"controller-manager-546d8d4f9d-pz622\" (UID: \"27f50297-ffae-4fb1-bbef-1a1c6dee329f\") " pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.011783 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01f69cc1-d403-4a61-be3c-d39ca0d91737-serving-cert\") pod \"route-controller-manager-d776c45b8-jzvkr\" (UID: \"01f69cc1-d403-4a61-be3c-d39ca0d91737\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.011877 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27f50297-ffae-4fb1-bbef-1a1c6dee329f-client-ca\") pod \"controller-manager-546d8d4f9d-pz622\" (UID: \"27f50297-ffae-4fb1-bbef-1a1c6dee329f\") " pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.011918 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01f69cc1-d403-4a61-be3c-d39ca0d91737-config\") pod \"route-controller-manager-d776c45b8-jzvkr\" (UID: \"01f69cc1-d403-4a61-be3c-d39ca0d91737\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.011959 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qc7k\" (UniqueName: \"kubernetes.io/projected/01f69cc1-d403-4a61-be3c-d39ca0d91737-kube-api-access-5qc7k\") pod \"route-controller-manager-d776c45b8-jzvkr\" (UID: \"01f69cc1-d403-4a61-be3c-d39ca0d91737\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.012107 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01f69cc1-d403-4a61-be3c-d39ca0d91737-client-ca\") pod \"route-controller-manager-d776c45b8-jzvkr\" (UID: \"01f69cc1-d403-4a61-be3c-d39ca0d91737\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.012345 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/27f50297-ffae-4fb1-bbef-1a1c6dee329f-config\") pod \"controller-manager-546d8d4f9d-pz622\" (UID: \"27f50297-ffae-4fb1-bbef-1a1c6dee329f\") " pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.115134 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27f50297-ffae-4fb1-bbef-1a1c6dee329f-config\") pod \"controller-manager-546d8d4f9d-pz622\" (UID: \"27f50297-ffae-4fb1-bbef-1a1c6dee329f\") " pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.115201 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27f50297-ffae-4fb1-bbef-1a1c6dee329f-serving-cert\") pod \"controller-manager-546d8d4f9d-pz622\" (UID: \"27f50297-ffae-4fb1-bbef-1a1c6dee329f\") " pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.115619 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzkqd\" (UniqueName: \"kubernetes.io/projected/27f50297-ffae-4fb1-bbef-1a1c6dee329f-kube-api-access-lzkqd\") pod \"controller-manager-546d8d4f9d-pz622\" (UID: \"27f50297-ffae-4fb1-bbef-1a1c6dee329f\") " pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.115650 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27f50297-ffae-4fb1-bbef-1a1c6dee329f-proxy-ca-bundles\") pod \"controller-manager-546d8d4f9d-pz622\" (UID: \"27f50297-ffae-4fb1-bbef-1a1c6dee329f\") " pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.115699 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01f69cc1-d403-4a61-be3c-d39ca0d91737-serving-cert\") pod \"route-controller-manager-d776c45b8-jzvkr\" (UID: \"01f69cc1-d403-4a61-be3c-d39ca0d91737\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.115735 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27f50297-ffae-4fb1-bbef-1a1c6dee329f-client-ca\") pod \"controller-manager-546d8d4f9d-pz622\" (UID: \"27f50297-ffae-4fb1-bbef-1a1c6dee329f\") " pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.115762 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01f69cc1-d403-4a61-be3c-d39ca0d91737-config\") pod \"route-controller-manager-d776c45b8-jzvkr\" (UID: \"01f69cc1-d403-4a61-be3c-d39ca0d91737\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.115790 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qc7k\" (UniqueName: \"kubernetes.io/projected/01f69cc1-d403-4a61-be3c-d39ca0d91737-kube-api-access-5qc7k\") pod \"route-controller-manager-d776c45b8-jzvkr\" (UID: 
\"01f69cc1-d403-4a61-be3c-d39ca0d91737\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.115814 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01f69cc1-d403-4a61-be3c-d39ca0d91737-client-ca\") pod \"route-controller-manager-d776c45b8-jzvkr\" (UID: \"01f69cc1-d403-4a61-be3c-d39ca0d91737\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.117877 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27f50297-ffae-4fb1-bbef-1a1c6dee329f-client-ca\") pod \"controller-manager-546d8d4f9d-pz622\" (UID: \"27f50297-ffae-4fb1-bbef-1a1c6dee329f\") " pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.117961 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27f50297-ffae-4fb1-bbef-1a1c6dee329f-config\") pod \"controller-manager-546d8d4f9d-pz622\" (UID: \"27f50297-ffae-4fb1-bbef-1a1c6dee329f\") " pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.118295 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27f50297-ffae-4fb1-bbef-1a1c6dee329f-proxy-ca-bundles\") pod \"controller-manager-546d8d4f9d-pz622\" (UID: \"27f50297-ffae-4fb1-bbef-1a1c6dee329f\") " pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.118966 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01f69cc1-d403-4a61-be3c-d39ca0d91737-config\") pod \"route-controller-manager-d776c45b8-jzvkr\" (UID: \"01f69cc1-d403-4a61-be3c-d39ca0d91737\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.121071 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01f69cc1-d403-4a61-be3c-d39ca0d91737-client-ca\") pod \"route-controller-manager-d776c45b8-jzvkr\" (UID: \"01f69cc1-d403-4a61-be3c-d39ca0d91737\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.123069 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01f69cc1-d403-4a61-be3c-d39ca0d91737-serving-cert\") pod \"route-controller-manager-d776c45b8-jzvkr\" (UID: \"01f69cc1-d403-4a61-be3c-d39ca0d91737\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.135005 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27f50297-ffae-4fb1-bbef-1a1c6dee329f-serving-cert\") pod \"controller-manager-546d8d4f9d-pz622\" (UID: \"27f50297-ffae-4fb1-bbef-1a1c6dee329f\") " pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.141329 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-lzkqd\" (UniqueName: \"kubernetes.io/projected/27f50297-ffae-4fb1-bbef-1a1c6dee329f-kube-api-access-lzkqd\") pod \"controller-manager-546d8d4f9d-pz622\" (UID: \"27f50297-ffae-4fb1-bbef-1a1c6dee329f\") " pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.147587 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qc7k\" (UniqueName: \"kubernetes.io/projected/01f69cc1-d403-4a61-be3c-d39ca0d91737-kube-api-access-5qc7k\") pod \"route-controller-manager-d776c45b8-jzvkr\" (UID: \"01f69cc1-d403-4a61-be3c-d39ca0d91737\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.227849 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.260330 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.474088 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-546d8d4f9d-pz622"] Jan 27 09:59:21 crc kubenswrapper[4869]: I0127 09:59:21.728269 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr"] Jan 27 09:59:22 crc kubenswrapper[4869]: I0127 09:59:22.038876 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17cbc9af-17b4-4815-b527-9d9d9c5112fc" path="/var/lib/kubelet/pods/17cbc9af-17b4-4815-b527-9d9d9c5112fc/volumes" Jan 27 09:59:22 crc kubenswrapper[4869]: I0127 09:59:22.039757 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6951bfc9-9908-4404-9000-cc243c35a314" path="/var/lib/kubelet/pods/6951bfc9-9908-4404-9000-cc243c35a314/volumes" Jan 27 09:59:22 crc kubenswrapper[4869]: I0127 09:59:22.040253 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" Jan 27 09:59:22 crc kubenswrapper[4869]: I0127 09:59:22.040289 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" event={"ID":"01f69cc1-d403-4a61-be3c-d39ca0d91737","Type":"ContainerStarted","Data":"89f842e5f0f91ea0bcbc0afa4ac1ebb5d921658b3131a0a91c717305b37ef44c"} Jan 27 09:59:22 crc kubenswrapper[4869]: I0127 09:59:22.040310 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" Jan 27 09:59:22 crc kubenswrapper[4869]: I0127 09:59:22.040324 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" event={"ID":"01f69cc1-d403-4a61-be3c-d39ca0d91737","Type":"ContainerStarted","Data":"d04d89b215f71db330e951713197c1c1c87bd76236340322f794f24bd6bf00bd"} Jan 27 09:59:22 crc kubenswrapper[4869]: I0127 09:59:22.040338 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" event={"ID":"27f50297-ffae-4fb1-bbef-1a1c6dee329f","Type":"ContainerStarted","Data":"838a60b74b10c833877a59b3bf2087698f6bc0fa6fa39fb570bb2649a8eedca4"} Jan 27 
09:59:22 crc kubenswrapper[4869]: I0127 09:59:22.040350 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" event={"ID":"27f50297-ffae-4fb1-bbef-1a1c6dee329f","Type":"ContainerStarted","Data":"330307bdb2c87b3cdeb5df43180c673de8a4093201456a555320cfe927c5d8c4"} Jan 27 09:59:22 crc kubenswrapper[4869]: I0127 09:59:22.048708 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" Jan 27 09:59:22 crc kubenswrapper[4869]: I0127 09:59:22.084530 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-546d8d4f9d-pz622" podStartSLOduration=3.084512703 podStartE2EDuration="3.084512703s" podCreationTimestamp="2026-01-27 09:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:59:22.081000725 +0000 UTC m=+330.701424818" watchObservedRunningTime="2026-01-27 09:59:22.084512703 +0000 UTC m=+330.704936796" Jan 27 09:59:22 crc kubenswrapper[4869]: I0127 09:59:22.114337 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" podStartSLOduration=3.114320905 podStartE2EDuration="3.114320905s" podCreationTimestamp="2026-01-27 09:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:59:22.11255992 +0000 UTC m=+330.732984013" watchObservedRunningTime="2026-01-27 09:59:22.114320905 +0000 UTC m=+330.734744988" Jan 27 09:59:22 crc kubenswrapper[4869]: I0127 09:59:22.566737 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" Jan 27 09:59:35 crc kubenswrapper[4869]: I0127 09:59:35.595307 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr"] Jan 27 09:59:35 crc kubenswrapper[4869]: I0127 09:59:35.595946 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" podUID="01f69cc1-d403-4a61-be3c-d39ca0d91737" containerName="route-controller-manager" containerID="cri-o://89f842e5f0f91ea0bcbc0afa4ac1ebb5d921658b3131a0a91c717305b37ef44c" gracePeriod=30 Jan 27 09:59:35 crc kubenswrapper[4869]: I0127 09:59:35.992817 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.101845 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01f69cc1-d403-4a61-be3c-d39ca0d91737-client-ca\") pod \"01f69cc1-d403-4a61-be3c-d39ca0d91737\" (UID: \"01f69cc1-d403-4a61-be3c-d39ca0d91737\") " Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.101959 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01f69cc1-d403-4a61-be3c-d39ca0d91737-serving-cert\") pod \"01f69cc1-d403-4a61-be3c-d39ca0d91737\" (UID: \"01f69cc1-d403-4a61-be3c-d39ca0d91737\") " Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.102028 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qc7k\" (UniqueName: \"kubernetes.io/projected/01f69cc1-d403-4a61-be3c-d39ca0d91737-kube-api-access-5qc7k\") pod \"01f69cc1-d403-4a61-be3c-d39ca0d91737\" (UID: \"01f69cc1-d403-4a61-be3c-d39ca0d91737\") " Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.102069 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01f69cc1-d403-4a61-be3c-d39ca0d91737-config\") pod \"01f69cc1-d403-4a61-be3c-d39ca0d91737\" (UID: \"01f69cc1-d403-4a61-be3c-d39ca0d91737\") " Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.103096 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01f69cc1-d403-4a61-be3c-d39ca0d91737-config" (OuterVolumeSpecName: "config") pod "01f69cc1-d403-4a61-be3c-d39ca0d91737" (UID: "01f69cc1-d403-4a61-be3c-d39ca0d91737"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.103126 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01f69cc1-d403-4a61-be3c-d39ca0d91737-client-ca" (OuterVolumeSpecName: "client-ca") pod "01f69cc1-d403-4a61-be3c-d39ca0d91737" (UID: "01f69cc1-d403-4a61-be3c-d39ca0d91737"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.107167 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f69cc1-d403-4a61-be3c-d39ca0d91737-kube-api-access-5qc7k" (OuterVolumeSpecName: "kube-api-access-5qc7k") pod "01f69cc1-d403-4a61-be3c-d39ca0d91737" (UID: "01f69cc1-d403-4a61-be3c-d39ca0d91737"). InnerVolumeSpecName "kube-api-access-5qc7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.108447 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f69cc1-d403-4a61-be3c-d39ca0d91737-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01f69cc1-d403-4a61-be3c-d39ca0d91737" (UID: "01f69cc1-d403-4a61-be3c-d39ca0d91737"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.112744 4869 generic.go:334] "Generic (PLEG): container finished" podID="01f69cc1-d403-4a61-be3c-d39ca0d91737" containerID="89f842e5f0f91ea0bcbc0afa4ac1ebb5d921658b3131a0a91c717305b37ef44c" exitCode=0 Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.112790 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" event={"ID":"01f69cc1-d403-4a61-be3c-d39ca0d91737","Type":"ContainerDied","Data":"89f842e5f0f91ea0bcbc0afa4ac1ebb5d921658b3131a0a91c717305b37ef44c"} Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.112825 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.112851 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr" event={"ID":"01f69cc1-d403-4a61-be3c-d39ca0d91737","Type":"ContainerDied","Data":"d04d89b215f71db330e951713197c1c1c87bd76236340322f794f24bd6bf00bd"} Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.112877 4869 scope.go:117] "RemoveContainer" containerID="89f842e5f0f91ea0bcbc0afa4ac1ebb5d921658b3131a0a91c717305b37ef44c" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.148261 4869 scope.go:117] "RemoveContainer" containerID="89f842e5f0f91ea0bcbc0afa4ac1ebb5d921658b3131a0a91c717305b37ef44c" Jan 27 09:59:36 crc kubenswrapper[4869]: E0127 09:59:36.148851 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89f842e5f0f91ea0bcbc0afa4ac1ebb5d921658b3131a0a91c717305b37ef44c\": container with ID starting with 89f842e5f0f91ea0bcbc0afa4ac1ebb5d921658b3131a0a91c717305b37ef44c not found: ID does not exist" containerID="89f842e5f0f91ea0bcbc0afa4ac1ebb5d921658b3131a0a91c717305b37ef44c" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.148920 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89f842e5f0f91ea0bcbc0afa4ac1ebb5d921658b3131a0a91c717305b37ef44c"} err="failed to get container status \"89f842e5f0f91ea0bcbc0afa4ac1ebb5d921658b3131a0a91c717305b37ef44c\": rpc error: code = NotFound desc = could not find container \"89f842e5f0f91ea0bcbc0afa4ac1ebb5d921658b3131a0a91c717305b37ef44c\": container with ID starting with 89f842e5f0f91ea0bcbc0afa4ac1ebb5d921658b3131a0a91c717305b37ef44c not found: ID does not exist" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.151960 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr"] Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.154746 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d776c45b8-jzvkr"] Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.204158 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01f69cc1-d403-4a61-be3c-d39ca0d91737-config\") on node \"crc\" DevicePath \"\"" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.204196 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01f69cc1-d403-4a61-be3c-d39ca0d91737-client-ca\") on node \"crc\" 
DevicePath \"\"" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.204230 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01f69cc1-d403-4a61-be3c-d39ca0d91737-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.204243 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qc7k\" (UniqueName: \"kubernetes.io/projected/01f69cc1-d403-4a61-be3c-d39ca0d91737-kube-api-access-5qc7k\") on node \"crc\" DevicePath \"\"" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.903978 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw"] Jan 27 09:59:36 crc kubenswrapper[4869]: E0127 09:59:36.904212 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f69cc1-d403-4a61-be3c-d39ca0d91737" containerName="route-controller-manager" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.904226 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f69cc1-d403-4a61-be3c-d39ca0d91737" containerName="route-controller-manager" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.904323 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="01f69cc1-d403-4a61-be3c-d39ca0d91737" containerName="route-controller-manager" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.904702 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.907127 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.907865 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.908542 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.908618 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.909746 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.913591 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 09:59:36 crc kubenswrapper[4869]: I0127 09:59:36.927889 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw"] Jan 27 09:59:37 crc kubenswrapper[4869]: I0127 09:59:37.014797 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-745s9\" (UniqueName: \"kubernetes.io/projected/8cfae142-2430-4929-b759-bc2b1d409090-kube-api-access-745s9\") pod \"route-controller-manager-6c9bcf89cc-drfnw\" (UID: \"8cfae142-2430-4929-b759-bc2b1d409090\") " pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" Jan 27 09:59:37 crc kubenswrapper[4869]: I0127 09:59:37.014898 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cfae142-2430-4929-b759-bc2b1d409090-serving-cert\") pod \"route-controller-manager-6c9bcf89cc-drfnw\" (UID: \"8cfae142-2430-4929-b759-bc2b1d409090\") " pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" Jan 27 09:59:37 crc kubenswrapper[4869]: I0127 09:59:37.014923 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfae142-2430-4929-b759-bc2b1d409090-config\") pod \"route-controller-manager-6c9bcf89cc-drfnw\" (UID: \"8cfae142-2430-4929-b759-bc2b1d409090\") " pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" Jan 27 09:59:37 crc kubenswrapper[4869]: I0127 09:59:37.014945 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cfae142-2430-4929-b759-bc2b1d409090-client-ca\") pod \"route-controller-manager-6c9bcf89cc-drfnw\" (UID: \"8cfae142-2430-4929-b759-bc2b1d409090\") " pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" Jan 27 09:59:37 crc kubenswrapper[4869]: I0127 09:59:37.116087 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cfae142-2430-4929-b759-bc2b1d409090-serving-cert\") pod \"route-controller-manager-6c9bcf89cc-drfnw\" (UID: \"8cfae142-2430-4929-b759-bc2b1d409090\") " pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" Jan 27 09:59:37 crc kubenswrapper[4869]: I0127 09:59:37.116164 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfae142-2430-4929-b759-bc2b1d409090-config\") pod \"route-controller-manager-6c9bcf89cc-drfnw\" (UID: \"8cfae142-2430-4929-b759-bc2b1d409090\") " pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" Jan 27 09:59:37 crc kubenswrapper[4869]: I0127 09:59:37.116211 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cfae142-2430-4929-b759-bc2b1d409090-client-ca\") pod \"route-controller-manager-6c9bcf89cc-drfnw\" (UID: \"8cfae142-2430-4929-b759-bc2b1d409090\") " pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" Jan 27 09:59:37 crc kubenswrapper[4869]: I0127 09:59:37.116299 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-745s9\" (UniqueName: \"kubernetes.io/projected/8cfae142-2430-4929-b759-bc2b1d409090-kube-api-access-745s9\") pod \"route-controller-manager-6c9bcf89cc-drfnw\" (UID: \"8cfae142-2430-4929-b759-bc2b1d409090\") " pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" Jan 27 09:59:37 crc kubenswrapper[4869]: I0127 09:59:37.117440 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cfae142-2430-4929-b759-bc2b1d409090-client-ca\") pod \"route-controller-manager-6c9bcf89cc-drfnw\" (UID: \"8cfae142-2430-4929-b759-bc2b1d409090\") " pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" Jan 27 09:59:37 crc kubenswrapper[4869]: I0127 09:59:37.117598 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/8cfae142-2430-4929-b759-bc2b1d409090-config\") pod \"route-controller-manager-6c9bcf89cc-drfnw\" (UID: \"8cfae142-2430-4929-b759-bc2b1d409090\") " pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" Jan 27 09:59:37 crc kubenswrapper[4869]: I0127 09:59:37.120052 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cfae142-2430-4929-b759-bc2b1d409090-serving-cert\") pod \"route-controller-manager-6c9bcf89cc-drfnw\" (UID: \"8cfae142-2430-4929-b759-bc2b1d409090\") " pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" Jan 27 09:59:37 crc kubenswrapper[4869]: I0127 09:59:37.172466 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-745s9\" (UniqueName: \"kubernetes.io/projected/8cfae142-2430-4929-b759-bc2b1d409090-kube-api-access-745s9\") pod \"route-controller-manager-6c9bcf89cc-drfnw\" (UID: \"8cfae142-2430-4929-b759-bc2b1d409090\") " pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" Jan 27 09:59:37 crc kubenswrapper[4869]: I0127 09:59:37.231533 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" Jan 27 09:59:37 crc kubenswrapper[4869]: I0127 09:59:37.632126 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw"] Jan 27 09:59:37 crc kubenswrapper[4869]: W0127 09:59:37.634284 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cfae142_2430_4929_b759_bc2b1d409090.slice/crio-c7505f143cd60fe582df4e1b3c4303ae39e21a401a8c2e1bdb85d5fafe40a3b0 WatchSource:0}: Error finding container c7505f143cd60fe582df4e1b3c4303ae39e21a401a8c2e1bdb85d5fafe40a3b0: Status 404 returned error can't find the container with id c7505f143cd60fe582df4e1b3c4303ae39e21a401a8c2e1bdb85d5fafe40a3b0 Jan 27 09:59:38 crc kubenswrapper[4869]: I0127 09:59:38.043628 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01f69cc1-d403-4a61-be3c-d39ca0d91737" path="/var/lib/kubelet/pods/01f69cc1-d403-4a61-be3c-d39ca0d91737/volumes" Jan 27 09:59:38 crc kubenswrapper[4869]: I0127 09:59:38.131627 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" event={"ID":"8cfae142-2430-4929-b759-bc2b1d409090","Type":"ContainerStarted","Data":"3a489395e58978a02ee6a7f336db87bd65e59bee1448fc850cc7fa6ab031ef24"} Jan 27 09:59:38 crc kubenswrapper[4869]: I0127 09:59:38.131679 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" event={"ID":"8cfae142-2430-4929-b759-bc2b1d409090","Type":"ContainerStarted","Data":"c7505f143cd60fe582df4e1b3c4303ae39e21a401a8c2e1bdb85d5fafe40a3b0"} Jan 27 09:59:38 crc kubenswrapper[4869]: I0127 09:59:38.132184 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" Jan 27 09:59:38 crc kubenswrapper[4869]: I0127 09:59:38.161147 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" podStartSLOduration=3.161116163 
podStartE2EDuration="3.161116163s" podCreationTimestamp="2026-01-27 09:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 09:59:38.157023966 +0000 UTC m=+346.777448049" watchObservedRunningTime="2026-01-27 09:59:38.161116163 +0000 UTC m=+346.781540276" Jan 27 09:59:38 crc kubenswrapper[4869]: I0127 09:59:38.310426 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" Jan 27 09:59:44 crc kubenswrapper[4869]: I0127 09:59:44.944145 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jnkxt"] Jan 27 09:59:44 crc kubenswrapper[4869]: I0127 09:59:44.945781 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jnkxt" Jan 27 09:59:44 crc kubenswrapper[4869]: I0127 09:59:44.948720 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 09:59:44 crc kubenswrapper[4869]: I0127 09:59:44.957602 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jnkxt"] Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.008975 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp4px\" (UniqueName: \"kubernetes.io/projected/9051fa8e-7223-46e5-b408-a806a99c45c2-kube-api-access-lp4px\") pod \"community-operators-jnkxt\" (UID: \"9051fa8e-7223-46e5-b408-a806a99c45c2\") " pod="openshift-marketplace/community-operators-jnkxt" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.009045 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9051fa8e-7223-46e5-b408-a806a99c45c2-utilities\") pod \"community-operators-jnkxt\" (UID: \"9051fa8e-7223-46e5-b408-a806a99c45c2\") " pod="openshift-marketplace/community-operators-jnkxt" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.009132 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9051fa8e-7223-46e5-b408-a806a99c45c2-catalog-content\") pod \"community-operators-jnkxt\" (UID: \"9051fa8e-7223-46e5-b408-a806a99c45c2\") " pod="openshift-marketplace/community-operators-jnkxt" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.110134 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9051fa8e-7223-46e5-b408-a806a99c45c2-catalog-content\") pod \"community-operators-jnkxt\" (UID: \"9051fa8e-7223-46e5-b408-a806a99c45c2\") " pod="openshift-marketplace/community-operators-jnkxt" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.110359 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp4px\" (UniqueName: \"kubernetes.io/projected/9051fa8e-7223-46e5-b408-a806a99c45c2-kube-api-access-lp4px\") pod \"community-operators-jnkxt\" (UID: \"9051fa8e-7223-46e5-b408-a806a99c45c2\") " pod="openshift-marketplace/community-operators-jnkxt" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.111000 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9051fa8e-7223-46e5-b408-a806a99c45c2-catalog-content\") pod \"community-operators-jnkxt\" (UID: \"9051fa8e-7223-46e5-b408-a806a99c45c2\") " pod="openshift-marketplace/community-operators-jnkxt" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.111160 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9051fa8e-7223-46e5-b408-a806a99c45c2-utilities\") pod \"community-operators-jnkxt\" (UID: \"9051fa8e-7223-46e5-b408-a806a99c45c2\") " pod="openshift-marketplace/community-operators-jnkxt" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.111803 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9051fa8e-7223-46e5-b408-a806a99c45c2-utilities\") pod \"community-operators-jnkxt\" (UID: \"9051fa8e-7223-46e5-b408-a806a99c45c2\") " pod="openshift-marketplace/community-operators-jnkxt" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.127999 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp4px\" (UniqueName: \"kubernetes.io/projected/9051fa8e-7223-46e5-b408-a806a99c45c2-kube-api-access-lp4px\") pod \"community-operators-jnkxt\" (UID: \"9051fa8e-7223-46e5-b408-a806a99c45c2\") " pod="openshift-marketplace/community-operators-jnkxt" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.137533 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hhg6m"] Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.138717 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hhg6m" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.141169 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.155387 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hhg6m"] Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.212750 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmn9j\" (UniqueName: \"kubernetes.io/projected/4ab43b25-6ea3-4061-9c4a-6fb427539d3c-kube-api-access-vmn9j\") pod \"certified-operators-hhg6m\" (UID: \"4ab43b25-6ea3-4061-9c4a-6fb427539d3c\") " pod="openshift-marketplace/certified-operators-hhg6m" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.212799 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ab43b25-6ea3-4061-9c4a-6fb427539d3c-utilities\") pod \"certified-operators-hhg6m\" (UID: \"4ab43b25-6ea3-4061-9c4a-6fb427539d3c\") " pod="openshift-marketplace/certified-operators-hhg6m" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.212984 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ab43b25-6ea3-4061-9c4a-6fb427539d3c-catalog-content\") pod \"certified-operators-hhg6m\" (UID: \"4ab43b25-6ea3-4061-9c4a-6fb427539d3c\") " pod="openshift-marketplace/certified-operators-hhg6m" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.266971 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jnkxt" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.314006 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ab43b25-6ea3-4061-9c4a-6fb427539d3c-catalog-content\") pod \"certified-operators-hhg6m\" (UID: \"4ab43b25-6ea3-4061-9c4a-6fb427539d3c\") " pod="openshift-marketplace/certified-operators-hhg6m" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.314339 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmn9j\" (UniqueName: \"kubernetes.io/projected/4ab43b25-6ea3-4061-9c4a-6fb427539d3c-kube-api-access-vmn9j\") pod \"certified-operators-hhg6m\" (UID: \"4ab43b25-6ea3-4061-9c4a-6fb427539d3c\") " pod="openshift-marketplace/certified-operators-hhg6m" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.314374 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ab43b25-6ea3-4061-9c4a-6fb427539d3c-utilities\") pod \"certified-operators-hhg6m\" (UID: \"4ab43b25-6ea3-4061-9c4a-6fb427539d3c\") " pod="openshift-marketplace/certified-operators-hhg6m" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.314992 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ab43b25-6ea3-4061-9c4a-6fb427539d3c-utilities\") pod \"certified-operators-hhg6m\" (UID: \"4ab43b25-6ea3-4061-9c4a-6fb427539d3c\") " pod="openshift-marketplace/certified-operators-hhg6m" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.315225 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ab43b25-6ea3-4061-9c4a-6fb427539d3c-catalog-content\") pod \"certified-operators-hhg6m\" (UID: \"4ab43b25-6ea3-4061-9c4a-6fb427539d3c\") " pod="openshift-marketplace/certified-operators-hhg6m" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.330741 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmn9j\" (UniqueName: \"kubernetes.io/projected/4ab43b25-6ea3-4061-9c4a-6fb427539d3c-kube-api-access-vmn9j\") pod \"certified-operators-hhg6m\" (UID: \"4ab43b25-6ea3-4061-9c4a-6fb427539d3c\") " pod="openshift-marketplace/certified-operators-hhg6m" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.489498 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hhg6m" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.671137 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jnkxt"] Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.698025 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.698288 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 09:59:45 crc kubenswrapper[4869]: I0127 09:59:45.887312 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hhg6m"] Jan 27 09:59:45 crc kubenswrapper[4869]: W0127 09:59:45.903352 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ab43b25_6ea3_4061_9c4a_6fb427539d3c.slice/crio-d8339f6b3731afa0f7f82161db5709e48014e75eab327b6c319e053f17d038ed WatchSource:0}: Error finding container d8339f6b3731afa0f7f82161db5709e48014e75eab327b6c319e053f17d038ed: Status 404 returned error can't find the container with id d8339f6b3731afa0f7f82161db5709e48014e75eab327b6c319e053f17d038ed Jan 27 09:59:46 crc kubenswrapper[4869]: I0127 09:59:46.436027 4869 generic.go:334] "Generic (PLEG): container finished" podID="4ab43b25-6ea3-4061-9c4a-6fb427539d3c" containerID="62050a7d5d7288948279a8de8f96e19b1c8dc0a6354e01f169572131345cafe0" exitCode=0 Jan 27 09:59:46 crc kubenswrapper[4869]: I0127 09:59:46.436115 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hhg6m" event={"ID":"4ab43b25-6ea3-4061-9c4a-6fb427539d3c","Type":"ContainerDied","Data":"62050a7d5d7288948279a8de8f96e19b1c8dc0a6354e01f169572131345cafe0"} Jan 27 09:59:46 crc kubenswrapper[4869]: I0127 09:59:46.436174 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hhg6m" event={"ID":"4ab43b25-6ea3-4061-9c4a-6fb427539d3c","Type":"ContainerStarted","Data":"d8339f6b3731afa0f7f82161db5709e48014e75eab327b6c319e053f17d038ed"} Jan 27 09:59:46 crc kubenswrapper[4869]: I0127 09:59:46.437882 4869 generic.go:334] "Generic (PLEG): container finished" podID="9051fa8e-7223-46e5-b408-a806a99c45c2" containerID="16aa166edde7a89581a53a60925ceba4bf393d36b76d061f77d84e758ccc1462" exitCode=0 Jan 27 09:59:46 crc kubenswrapper[4869]: I0127 09:59:46.438289 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jnkxt" event={"ID":"9051fa8e-7223-46e5-b408-a806a99c45c2","Type":"ContainerDied","Data":"16aa166edde7a89581a53a60925ceba4bf393d36b76d061f77d84e758ccc1462"} Jan 27 09:59:46 crc kubenswrapper[4869]: I0127 09:59:46.438323 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jnkxt" event={"ID":"9051fa8e-7223-46e5-b408-a806a99c45c2","Type":"ContainerStarted","Data":"b7739e045f2d015abc01d0657f2f10f304daa6756e9c4e46e7e7502d017e0e00"} Jan 27 09:59:47 crc kubenswrapper[4869]: 
I0127 09:59:47.344484 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tzfz8"] Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.346293 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tzfz8" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.350231 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.350962 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tzfz8"] Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.438520 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf753a6b-b086-4055-8232-efcb9ed72ac6-catalog-content\") pod \"redhat-marketplace-tzfz8\" (UID: \"bf753a6b-b086-4055-8232-efcb9ed72ac6\") " pod="openshift-marketplace/redhat-marketplace-tzfz8" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.438626 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf753a6b-b086-4055-8232-efcb9ed72ac6-utilities\") pod \"redhat-marketplace-tzfz8\" (UID: \"bf753a6b-b086-4055-8232-efcb9ed72ac6\") " pod="openshift-marketplace/redhat-marketplace-tzfz8" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.438668 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flhbg\" (UniqueName: \"kubernetes.io/projected/bf753a6b-b086-4055-8232-efcb9ed72ac6-kube-api-access-flhbg\") pod \"redhat-marketplace-tzfz8\" (UID: \"bf753a6b-b086-4055-8232-efcb9ed72ac6\") " pod="openshift-marketplace/redhat-marketplace-tzfz8" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.443728 4869 generic.go:334] "Generic (PLEG): container finished" podID="9051fa8e-7223-46e5-b408-a806a99c45c2" containerID="f69e7f6aaf174f684c37bc91408e95a075aa3f4422eb3d50fe9df65fbc4f4736" exitCode=0 Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.443807 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jnkxt" event={"ID":"9051fa8e-7223-46e5-b408-a806a99c45c2","Type":"ContainerDied","Data":"f69e7f6aaf174f684c37bc91408e95a075aa3f4422eb3d50fe9df65fbc4f4736"} Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.445725 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hhg6m" event={"ID":"4ab43b25-6ea3-4061-9c4a-6fb427539d3c","Type":"ContainerDied","Data":"4acc83b116e18792de3b92e104a1773260360831ca3d74107941934a3c0fe741"} Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.447306 4869 generic.go:334] "Generic (PLEG): container finished" podID="4ab43b25-6ea3-4061-9c4a-6fb427539d3c" containerID="4acc83b116e18792de3b92e104a1773260360831ca3d74107941934a3c0fe741" exitCode=0 Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.536710 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zqvpq"] Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.537651 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zqvpq" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.543090 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf753a6b-b086-4055-8232-efcb9ed72ac6-catalog-content\") pod \"redhat-marketplace-tzfz8\" (UID: \"bf753a6b-b086-4055-8232-efcb9ed72ac6\") " pod="openshift-marketplace/redhat-marketplace-tzfz8" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.543204 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf753a6b-b086-4055-8232-efcb9ed72ac6-utilities\") pod \"redhat-marketplace-tzfz8\" (UID: \"bf753a6b-b086-4055-8232-efcb9ed72ac6\") " pod="openshift-marketplace/redhat-marketplace-tzfz8" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.543255 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flhbg\" (UniqueName: \"kubernetes.io/projected/bf753a6b-b086-4055-8232-efcb9ed72ac6-kube-api-access-flhbg\") pod \"redhat-marketplace-tzfz8\" (UID: \"bf753a6b-b086-4055-8232-efcb9ed72ac6\") " pod="openshift-marketplace/redhat-marketplace-tzfz8" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.543882 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.543990 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf753a6b-b086-4055-8232-efcb9ed72ac6-utilities\") pod \"redhat-marketplace-tzfz8\" (UID: \"bf753a6b-b086-4055-8232-efcb9ed72ac6\") " pod="openshift-marketplace/redhat-marketplace-tzfz8" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.550159 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf753a6b-b086-4055-8232-efcb9ed72ac6-catalog-content\") pod \"redhat-marketplace-tzfz8\" (UID: \"bf753a6b-b086-4055-8232-efcb9ed72ac6\") " pod="openshift-marketplace/redhat-marketplace-tzfz8" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.553944 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqvpq"] Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.572743 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flhbg\" (UniqueName: \"kubernetes.io/projected/bf753a6b-b086-4055-8232-efcb9ed72ac6-kube-api-access-flhbg\") pod \"redhat-marketplace-tzfz8\" (UID: \"bf753a6b-b086-4055-8232-efcb9ed72ac6\") " pod="openshift-marketplace/redhat-marketplace-tzfz8" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.644333 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v67wv\" (UniqueName: \"kubernetes.io/projected/96a927cc-df8b-4011-8eb6-ab3b2ebdda7a-kube-api-access-v67wv\") pod \"redhat-operators-zqvpq\" (UID: \"96a927cc-df8b-4011-8eb6-ab3b2ebdda7a\") " pod="openshift-marketplace/redhat-operators-zqvpq" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.644403 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96a927cc-df8b-4011-8eb6-ab3b2ebdda7a-utilities\") pod \"redhat-operators-zqvpq\" (UID: \"96a927cc-df8b-4011-8eb6-ab3b2ebdda7a\") " 
pod="openshift-marketplace/redhat-operators-zqvpq" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.644474 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96a927cc-df8b-4011-8eb6-ab3b2ebdda7a-catalog-content\") pod \"redhat-operators-zqvpq\" (UID: \"96a927cc-df8b-4011-8eb6-ab3b2ebdda7a\") " pod="openshift-marketplace/redhat-operators-zqvpq" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.729870 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tzfz8" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.745934 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v67wv\" (UniqueName: \"kubernetes.io/projected/96a927cc-df8b-4011-8eb6-ab3b2ebdda7a-kube-api-access-v67wv\") pod \"redhat-operators-zqvpq\" (UID: \"96a927cc-df8b-4011-8eb6-ab3b2ebdda7a\") " pod="openshift-marketplace/redhat-operators-zqvpq" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.746019 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96a927cc-df8b-4011-8eb6-ab3b2ebdda7a-utilities\") pod \"redhat-operators-zqvpq\" (UID: \"96a927cc-df8b-4011-8eb6-ab3b2ebdda7a\") " pod="openshift-marketplace/redhat-operators-zqvpq" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.746091 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96a927cc-df8b-4011-8eb6-ab3b2ebdda7a-catalog-content\") pod \"redhat-operators-zqvpq\" (UID: \"96a927cc-df8b-4011-8eb6-ab3b2ebdda7a\") " pod="openshift-marketplace/redhat-operators-zqvpq" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.746920 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96a927cc-df8b-4011-8eb6-ab3b2ebdda7a-utilities\") pod \"redhat-operators-zqvpq\" (UID: \"96a927cc-df8b-4011-8eb6-ab3b2ebdda7a\") " pod="openshift-marketplace/redhat-operators-zqvpq" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.747014 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96a927cc-df8b-4011-8eb6-ab3b2ebdda7a-catalog-content\") pod \"redhat-operators-zqvpq\" (UID: \"96a927cc-df8b-4011-8eb6-ab3b2ebdda7a\") " pod="openshift-marketplace/redhat-operators-zqvpq" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.766219 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v67wv\" (UniqueName: \"kubernetes.io/projected/96a927cc-df8b-4011-8eb6-ab3b2ebdda7a-kube-api-access-v67wv\") pod \"redhat-operators-zqvpq\" (UID: \"96a927cc-df8b-4011-8eb6-ab3b2ebdda7a\") " pod="openshift-marketplace/redhat-operators-zqvpq" Jan 27 09:59:47 crc kubenswrapper[4869]: I0127 09:59:47.867390 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zqvpq" Jan 27 09:59:48 crc kubenswrapper[4869]: I0127 09:59:48.149935 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tzfz8"] Jan 27 09:59:48 crc kubenswrapper[4869]: W0127 09:59:48.155758 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf753a6b_b086_4055_8232_efcb9ed72ac6.slice/crio-e5aba0375050ef955d1e549f36e122e6f36918b964af2cb4fd1e9bd72e739da3 WatchSource:0}: Error finding container e5aba0375050ef955d1e549f36e122e6f36918b964af2cb4fd1e9bd72e739da3: Status 404 returned error can't find the container with id e5aba0375050ef955d1e549f36e122e6f36918b964af2cb4fd1e9bd72e739da3 Jan 27 09:59:48 crc kubenswrapper[4869]: I0127 09:59:48.275780 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqvpq"] Jan 27 09:59:48 crc kubenswrapper[4869]: W0127 09:59:48.286397 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96a927cc_df8b_4011_8eb6_ab3b2ebdda7a.slice/crio-4b96b423b5beda60c54539765475fbc0dbce0fef46adc4f88c19e4d75c881d80 WatchSource:0}: Error finding container 4b96b423b5beda60c54539765475fbc0dbce0fef46adc4f88c19e4d75c881d80: Status 404 returned error can't find the container with id 4b96b423b5beda60c54539765475fbc0dbce0fef46adc4f88c19e4d75c881d80 Jan 27 09:59:48 crc kubenswrapper[4869]: I0127 09:59:48.452966 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqvpq" event={"ID":"96a927cc-df8b-4011-8eb6-ab3b2ebdda7a","Type":"ContainerStarted","Data":"54c8728fd8986174e4969ae965e4ee6c46388ac4dcf1c89d7397674430c2a154"} Jan 27 09:59:48 crc kubenswrapper[4869]: I0127 09:59:48.453003 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqvpq" event={"ID":"96a927cc-df8b-4011-8eb6-ab3b2ebdda7a","Type":"ContainerStarted","Data":"4b96b423b5beda60c54539765475fbc0dbce0fef46adc4f88c19e4d75c881d80"} Jan 27 09:59:48 crc kubenswrapper[4869]: I0127 09:59:48.455676 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hhg6m" event={"ID":"4ab43b25-6ea3-4061-9c4a-6fb427539d3c","Type":"ContainerStarted","Data":"859280bf72fdff4b2100b04ec58d42521b3003789367e4d530c50a873f457fb0"} Jan 27 09:59:48 crc kubenswrapper[4869]: I0127 09:59:48.457214 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jnkxt" event={"ID":"9051fa8e-7223-46e5-b408-a806a99c45c2","Type":"ContainerStarted","Data":"d9065fefaf7f97f7e472ca7f0e34a6caffb87f5a3d0eed76a64ced55c830fa75"} Jan 27 09:59:48 crc kubenswrapper[4869]: I0127 09:59:48.458051 4869 generic.go:334] "Generic (PLEG): container finished" podID="bf753a6b-b086-4055-8232-efcb9ed72ac6" containerID="9ad09314145f63f10bf2c79135c02dbf006a4125dafe7cee1468fff1718aa7ec" exitCode=0 Jan 27 09:59:48 crc kubenswrapper[4869]: I0127 09:59:48.458086 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzfz8" event={"ID":"bf753a6b-b086-4055-8232-efcb9ed72ac6","Type":"ContainerDied","Data":"9ad09314145f63f10bf2c79135c02dbf006a4125dafe7cee1468fff1718aa7ec"} Jan 27 09:59:48 crc kubenswrapper[4869]: I0127 09:59:48.458103 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzfz8" 
event={"ID":"bf753a6b-b086-4055-8232-efcb9ed72ac6","Type":"ContainerStarted","Data":"e5aba0375050ef955d1e549f36e122e6f36918b964af2cb4fd1e9bd72e739da3"} Jan 27 09:59:48 crc kubenswrapper[4869]: I0127 09:59:48.487075 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hhg6m" podStartSLOduration=2.039961691 podStartE2EDuration="3.48706092s" podCreationTimestamp="2026-01-27 09:59:45 +0000 UTC" firstStartedPulling="2026-01-27 09:59:46.438962055 +0000 UTC m=+355.059386148" lastFinishedPulling="2026-01-27 09:59:47.886061294 +0000 UTC m=+356.506485377" observedRunningTime="2026-01-27 09:59:48.485625346 +0000 UTC m=+357.106049429" watchObservedRunningTime="2026-01-27 09:59:48.48706092 +0000 UTC m=+357.107485003" Jan 27 09:59:48 crc kubenswrapper[4869]: I0127 09:59:48.521810 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jnkxt" podStartSLOduration=3.085889911 podStartE2EDuration="4.521791274s" podCreationTimestamp="2026-01-27 09:59:44 +0000 UTC" firstStartedPulling="2026-01-27 09:59:46.439599285 +0000 UTC m=+355.060023368" lastFinishedPulling="2026-01-27 09:59:47.875500648 +0000 UTC m=+356.495924731" observedRunningTime="2026-01-27 09:59:48.520670909 +0000 UTC m=+357.141094992" watchObservedRunningTime="2026-01-27 09:59:48.521791274 +0000 UTC m=+357.142215357" Jan 27 09:59:49 crc kubenswrapper[4869]: I0127 09:59:49.466003 4869 generic.go:334] "Generic (PLEG): container finished" podID="bf753a6b-b086-4055-8232-efcb9ed72ac6" containerID="9114a67f071e6f207d3f306f4c601fdb44359e9076a18e1f845cbeb91558651e" exitCode=0 Jan 27 09:59:49 crc kubenswrapper[4869]: I0127 09:59:49.466087 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzfz8" event={"ID":"bf753a6b-b086-4055-8232-efcb9ed72ac6","Type":"ContainerDied","Data":"9114a67f071e6f207d3f306f4c601fdb44359e9076a18e1f845cbeb91558651e"} Jan 27 09:59:49 crc kubenswrapper[4869]: I0127 09:59:49.468647 4869 generic.go:334] "Generic (PLEG): container finished" podID="96a927cc-df8b-4011-8eb6-ab3b2ebdda7a" containerID="54c8728fd8986174e4969ae965e4ee6c46388ac4dcf1c89d7397674430c2a154" exitCode=0 Jan 27 09:59:49 crc kubenswrapper[4869]: I0127 09:59:49.468695 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqvpq" event={"ID":"96a927cc-df8b-4011-8eb6-ab3b2ebdda7a","Type":"ContainerDied","Data":"54c8728fd8986174e4969ae965e4ee6c46388ac4dcf1c89d7397674430c2a154"} Jan 27 09:59:50 crc kubenswrapper[4869]: I0127 09:59:50.476665 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tzfz8" event={"ID":"bf753a6b-b086-4055-8232-efcb9ed72ac6","Type":"ContainerStarted","Data":"567d40d8675405d60c36e94da34556c543fcbc2fab4d37f6284ea91541090731"} Jan 27 09:59:50 crc kubenswrapper[4869]: I0127 09:59:50.479354 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqvpq" event={"ID":"96a927cc-df8b-4011-8eb6-ab3b2ebdda7a","Type":"ContainerStarted","Data":"6c1465d76c2b37336edd4ea7f1bb510ff12249fa1c1ab2786b05d942e7de7d76"} Jan 27 09:59:50 crc kubenswrapper[4869]: I0127 09:59:50.492046 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tzfz8" podStartSLOduration=1.861570875 podStartE2EDuration="3.492028402s" podCreationTimestamp="2026-01-27 09:59:47 +0000 UTC" firstStartedPulling="2026-01-27 
09:59:48.458901139 +0000 UTC m=+357.079325222" lastFinishedPulling="2026-01-27 09:59:50.089358666 +0000 UTC m=+358.709782749" observedRunningTime="2026-01-27 09:59:50.490484465 +0000 UTC m=+359.110908558" watchObservedRunningTime="2026-01-27 09:59:50.492028402 +0000 UTC m=+359.112452485" Jan 27 09:59:51 crc kubenswrapper[4869]: I0127 09:59:51.484768 4869 generic.go:334] "Generic (PLEG): container finished" podID="96a927cc-df8b-4011-8eb6-ab3b2ebdda7a" containerID="6c1465d76c2b37336edd4ea7f1bb510ff12249fa1c1ab2786b05d942e7de7d76" exitCode=0 Jan 27 09:59:51 crc kubenswrapper[4869]: I0127 09:59:51.484875 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqvpq" event={"ID":"96a927cc-df8b-4011-8eb6-ab3b2ebdda7a","Type":"ContainerDied","Data":"6c1465d76c2b37336edd4ea7f1bb510ff12249fa1c1ab2786b05d942e7de7d76"} Jan 27 09:59:52 crc kubenswrapper[4869]: I0127 09:59:52.491514 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqvpq" event={"ID":"96a927cc-df8b-4011-8eb6-ab3b2ebdda7a","Type":"ContainerStarted","Data":"f3994e405b76257c6188c477660d2e89d7b7e30bb553160601b859002e9b97d6"} Jan 27 09:59:55 crc kubenswrapper[4869]: I0127 09:59:55.267409 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jnkxt" Jan 27 09:59:55 crc kubenswrapper[4869]: I0127 09:59:55.267709 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jnkxt" Jan 27 09:59:55 crc kubenswrapper[4869]: I0127 09:59:55.320949 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jnkxt" Jan 27 09:59:55 crc kubenswrapper[4869]: I0127 09:59:55.350212 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zqvpq" podStartSLOduration=5.944499176 podStartE2EDuration="8.350186314s" podCreationTimestamp="2026-01-27 09:59:47 +0000 UTC" firstStartedPulling="2026-01-27 09:59:49.470158727 +0000 UTC m=+358.090582800" lastFinishedPulling="2026-01-27 09:59:51.875845865 +0000 UTC m=+360.496269938" observedRunningTime="2026-01-27 09:59:52.507371114 +0000 UTC m=+361.127795207" watchObservedRunningTime="2026-01-27 09:59:55.350186314 +0000 UTC m=+363.970610407" Jan 27 09:59:55 crc kubenswrapper[4869]: I0127 09:59:55.490080 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hhg6m" Jan 27 09:59:55 crc kubenswrapper[4869]: I0127 09:59:55.490148 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hhg6m" Jan 27 09:59:55 crc kubenswrapper[4869]: I0127 09:59:55.529760 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hhg6m" Jan 27 09:59:55 crc kubenswrapper[4869]: I0127 09:59:55.560159 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jnkxt" Jan 27 09:59:55 crc kubenswrapper[4869]: I0127 09:59:55.575043 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hhg6m" Jan 27 09:59:57 crc kubenswrapper[4869]: I0127 09:59:57.730333 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tzfz8" Jan 27 09:59:57 crc 
kubenswrapper[4869]: I0127 09:59:57.730860 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tzfz8" Jan 27 09:59:57 crc kubenswrapper[4869]: I0127 09:59:57.778455 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tzfz8" Jan 27 09:59:57 crc kubenswrapper[4869]: I0127 09:59:57.867662 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zqvpq" Jan 27 09:59:57 crc kubenswrapper[4869]: I0127 09:59:57.867726 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zqvpq" Jan 27 09:59:58 crc kubenswrapper[4869]: I0127 09:59:58.582610 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tzfz8" Jan 27 09:59:59 crc kubenswrapper[4869]: I0127 09:59:58.905324 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zqvpq" podUID="96a927cc-df8b-4011-8eb6-ab3b2ebdda7a" containerName="registry-server" probeResult="failure" output=< Jan 27 09:59:59 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Jan 27 09:59:59 crc kubenswrapper[4869]: > Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.079557 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw"] Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.079753 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" podUID="8cfae142-2430-4929-b759-bc2b1d409090" containerName="route-controller-manager" containerID="cri-o://3a489395e58978a02ee6a7f336db87bd65e59bee1448fc850cc7fa6ab031ef24" gracePeriod=30 Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.242347 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n"] Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.243236 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.245211 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.245375 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.249474 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n"] Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.300577 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e44d1050-60ab-468f-8716-2d74939a3820-secret-volume\") pod \"collect-profiles-29491800-hfl7n\" (UID: \"e44d1050-60ab-468f-8716-2d74939a3820\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.300636 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gbc4\" (UniqueName: \"kubernetes.io/projected/e44d1050-60ab-468f-8716-2d74939a3820-kube-api-access-5gbc4\") pod \"collect-profiles-29491800-hfl7n\" (UID: \"e44d1050-60ab-468f-8716-2d74939a3820\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.300659 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e44d1050-60ab-468f-8716-2d74939a3820-config-volume\") pod \"collect-profiles-29491800-hfl7n\" (UID: \"e44d1050-60ab-468f-8716-2d74939a3820\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.402859 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e44d1050-60ab-468f-8716-2d74939a3820-secret-volume\") pod \"collect-profiles-29491800-hfl7n\" (UID: \"e44d1050-60ab-468f-8716-2d74939a3820\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.402984 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gbc4\" (UniqueName: \"kubernetes.io/projected/e44d1050-60ab-468f-8716-2d74939a3820-kube-api-access-5gbc4\") pod \"collect-profiles-29491800-hfl7n\" (UID: \"e44d1050-60ab-468f-8716-2d74939a3820\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.403025 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e44d1050-60ab-468f-8716-2d74939a3820-config-volume\") pod \"collect-profiles-29491800-hfl7n\" (UID: \"e44d1050-60ab-468f-8716-2d74939a3820\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.405325 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e44d1050-60ab-468f-8716-2d74939a3820-config-volume\") pod 
\"collect-profiles-29491800-hfl7n\" (UID: \"e44d1050-60ab-468f-8716-2d74939a3820\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.413355 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e44d1050-60ab-468f-8716-2d74939a3820-secret-volume\") pod \"collect-profiles-29491800-hfl7n\" (UID: \"e44d1050-60ab-468f-8716-2d74939a3820\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.425697 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gbc4\" (UniqueName: \"kubernetes.io/projected/e44d1050-60ab-468f-8716-2d74939a3820-kube-api-access-5gbc4\") pod \"collect-profiles-29491800-hfl7n\" (UID: \"e44d1050-60ab-468f-8716-2d74939a3820\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.457581 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.503671 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cfae142-2430-4929-b759-bc2b1d409090-serving-cert\") pod \"8cfae142-2430-4929-b759-bc2b1d409090\" (UID: \"8cfae142-2430-4929-b759-bc2b1d409090\") " Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.503729 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-745s9\" (UniqueName: \"kubernetes.io/projected/8cfae142-2430-4929-b759-bc2b1d409090-kube-api-access-745s9\") pod \"8cfae142-2430-4929-b759-bc2b1d409090\" (UID: \"8cfae142-2430-4929-b759-bc2b1d409090\") " Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.503774 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cfae142-2430-4929-b759-bc2b1d409090-client-ca\") pod \"8cfae142-2430-4929-b759-bc2b1d409090\" (UID: \"8cfae142-2430-4929-b759-bc2b1d409090\") " Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.503794 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfae142-2430-4929-b759-bc2b1d409090-config\") pod \"8cfae142-2430-4929-b759-bc2b1d409090\" (UID: \"8cfae142-2430-4929-b759-bc2b1d409090\") " Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.504887 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cfae142-2430-4929-b759-bc2b1d409090-config" (OuterVolumeSpecName: "config") pod "8cfae142-2430-4929-b759-bc2b1d409090" (UID: "8cfae142-2430-4929-b759-bc2b1d409090"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.505316 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cfae142-2430-4929-b759-bc2b1d409090-client-ca" (OuterVolumeSpecName: "client-ca") pod "8cfae142-2430-4929-b759-bc2b1d409090" (UID: "8cfae142-2430-4929-b759-bc2b1d409090"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.507142 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cfae142-2430-4929-b759-bc2b1d409090-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cfae142-2430-4929-b759-bc2b1d409090" (UID: "8cfae142-2430-4929-b759-bc2b1d409090"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.507173 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cfae142-2430-4929-b759-bc2b1d409090-kube-api-access-745s9" (OuterVolumeSpecName: "kube-api-access-745s9") pod "8cfae142-2430-4929-b759-bc2b1d409090" (UID: "8cfae142-2430-4929-b759-bc2b1d409090"). InnerVolumeSpecName "kube-api-access-745s9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.547687 4869 generic.go:334] "Generic (PLEG): container finished" podID="8cfae142-2430-4929-b759-bc2b1d409090" containerID="3a489395e58978a02ee6a7f336db87bd65e59bee1448fc850cc7fa6ab031ef24" exitCode=0 Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.547725 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" event={"ID":"8cfae142-2430-4929-b759-bc2b1d409090","Type":"ContainerDied","Data":"3a489395e58978a02ee6a7f336db87bd65e59bee1448fc850cc7fa6ab031ef24"} Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.547754 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" event={"ID":"8cfae142-2430-4929-b759-bc2b1d409090","Type":"ContainerDied","Data":"c7505f143cd60fe582df4e1b3c4303ae39e21a401a8c2e1bdb85d5fafe40a3b0"} Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.547773 4869 scope.go:117] "RemoveContainer" containerID="3a489395e58978a02ee6a7f336db87bd65e59bee1448fc850cc7fa6ab031ef24" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.547904 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.560168 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.591310 4869 scope.go:117] "RemoveContainer" containerID="3a489395e58978a02ee6a7f336db87bd65e59bee1448fc850cc7fa6ab031ef24" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.593015 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw"] Jan 27 10:00:00 crc kubenswrapper[4869]: E0127 10:00:00.593302 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a489395e58978a02ee6a7f336db87bd65e59bee1448fc850cc7fa6ab031ef24\": container with ID starting with 3a489395e58978a02ee6a7f336db87bd65e59bee1448fc850cc7fa6ab031ef24 not found: ID does not exist" containerID="3a489395e58978a02ee6a7f336db87bd65e59bee1448fc850cc7fa6ab031ef24" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.593345 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a489395e58978a02ee6a7f336db87bd65e59bee1448fc850cc7fa6ab031ef24"} err="failed to get container status \"3a489395e58978a02ee6a7f336db87bd65e59bee1448fc850cc7fa6ab031ef24\": rpc error: code = NotFound desc = could not find container \"3a489395e58978a02ee6a7f336db87bd65e59bee1448fc850cc7fa6ab031ef24\": container with ID starting with 3a489395e58978a02ee6a7f336db87bd65e59bee1448fc850cc7fa6ab031ef24 not found: ID does not exist" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.597624 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c9bcf89cc-drfnw"] Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.605815 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cfae142-2430-4929-b759-bc2b1d409090-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.605900 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-745s9\" (UniqueName: \"kubernetes.io/projected/8cfae142-2430-4929-b759-bc2b1d409090-kube-api-access-745s9\") on node \"crc\" DevicePath \"\"" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.605914 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cfae142-2430-4929-b759-bc2b1d409090-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 10:00:00 crc kubenswrapper[4869]: I0127 10:00:00.605924 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfae142-2430-4929-b759-bc2b1d409090-config\") on node \"crc\" DevicePath \"\"" Jan 27 10:00:01 crc kubenswrapper[4869]: I0127 10:00:01.056774 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n"] Jan 27 10:00:01 crc kubenswrapper[4869]: I0127 10:00:01.556025 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n" event={"ID":"e44d1050-60ab-468f-8716-2d74939a3820","Type":"ContainerStarted","Data":"6577c34c76ce03b9b7c660fe77d9b9d79a5590af76fc29973182e32afb76235c"} Jan 27 10:00:01 crc kubenswrapper[4869]: I0127 10:00:01.556488 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n" 
event={"ID":"e44d1050-60ab-468f-8716-2d74939a3820","Type":"ContainerStarted","Data":"c0c710b4758d25b59336c304a8131a22ead251e77e7fbf887b495eec8e7af9f6"} Jan 27 10:00:01 crc kubenswrapper[4869]: I0127 10:00:01.922801 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4"] Jan 27 10:00:01 crc kubenswrapper[4869]: E0127 10:00:01.923041 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cfae142-2430-4929-b759-bc2b1d409090" containerName="route-controller-manager" Jan 27 10:00:01 crc kubenswrapper[4869]: I0127 10:00:01.923056 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cfae142-2430-4929-b759-bc2b1d409090" containerName="route-controller-manager" Jan 27 10:00:01 crc kubenswrapper[4869]: I0127 10:00:01.923383 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cfae142-2430-4929-b759-bc2b1d409090" containerName="route-controller-manager" Jan 27 10:00:01 crc kubenswrapper[4869]: I0127 10:00:01.923770 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4" Jan 27 10:00:01 crc kubenswrapper[4869]: I0127 10:00:01.925371 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 10:00:01 crc kubenswrapper[4869]: I0127 10:00:01.926358 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 10:00:01 crc kubenswrapper[4869]: I0127 10:00:01.926773 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 10:00:01 crc kubenswrapper[4869]: I0127 10:00:01.926980 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 10:00:01 crc kubenswrapper[4869]: I0127 10:00:01.927066 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 10:00:01 crc kubenswrapper[4869]: I0127 10:00:01.928460 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 10:00:01 crc kubenswrapper[4869]: I0127 10:00:01.938204 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4"] Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.022310 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e0f7f56-38c3-4f37-828e-0ce1b125d556-client-ca\") pod \"route-controller-manager-d776c45b8-262w4\" (UID: \"4e0f7f56-38c3-4f37-828e-0ce1b125d556\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.022357 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e0f7f56-38c3-4f37-828e-0ce1b125d556-config\") pod \"route-controller-manager-d776c45b8-262w4\" (UID: \"4e0f7f56-38c3-4f37-828e-0ce1b125d556\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.022377 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e0f7f56-38c3-4f37-828e-0ce1b125d556-serving-cert\") pod \"route-controller-manager-d776c45b8-262w4\" (UID: \"4e0f7f56-38c3-4f37-828e-0ce1b125d556\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.022399 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mtlg\" (UniqueName: \"kubernetes.io/projected/4e0f7f56-38c3-4f37-828e-0ce1b125d556-kube-api-access-8mtlg\") pod \"route-controller-manager-d776c45b8-262w4\" (UID: \"4e0f7f56-38c3-4f37-828e-0ce1b125d556\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.043672 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cfae142-2430-4929-b759-bc2b1d409090" path="/var/lib/kubelet/pods/8cfae142-2430-4929-b759-bc2b1d409090/volumes" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.124237 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e0f7f56-38c3-4f37-828e-0ce1b125d556-client-ca\") pod \"route-controller-manager-d776c45b8-262w4\" (UID: \"4e0f7f56-38c3-4f37-828e-0ce1b125d556\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.124658 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e0f7f56-38c3-4f37-828e-0ce1b125d556-config\") pod \"route-controller-manager-d776c45b8-262w4\" (UID: \"4e0f7f56-38c3-4f37-828e-0ce1b125d556\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.124681 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e0f7f56-38c3-4f37-828e-0ce1b125d556-serving-cert\") pod \"route-controller-manager-d776c45b8-262w4\" (UID: \"4e0f7f56-38c3-4f37-828e-0ce1b125d556\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.124698 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mtlg\" (UniqueName: \"kubernetes.io/projected/4e0f7f56-38c3-4f37-828e-0ce1b125d556-kube-api-access-8mtlg\") pod \"route-controller-manager-d776c45b8-262w4\" (UID: \"4e0f7f56-38c3-4f37-828e-0ce1b125d556\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.127084 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e0f7f56-38c3-4f37-828e-0ce1b125d556-client-ca\") pod \"route-controller-manager-d776c45b8-262w4\" (UID: \"4e0f7f56-38c3-4f37-828e-0ce1b125d556\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.127315 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e0f7f56-38c3-4f37-828e-0ce1b125d556-config\") pod \"route-controller-manager-d776c45b8-262w4\" (UID: 
\"4e0f7f56-38c3-4f37-828e-0ce1b125d556\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.139895 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cfvs2"] Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.140752 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.140819 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e0f7f56-38c3-4f37-828e-0ce1b125d556-serving-cert\") pod \"route-controller-manager-d776c45b8-262w4\" (UID: \"4e0f7f56-38c3-4f37-828e-0ce1b125d556\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.151708 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mtlg\" (UniqueName: \"kubernetes.io/projected/4e0f7f56-38c3-4f37-828e-0ce1b125d556-kube-api-access-8mtlg\") pod \"route-controller-manager-d776c45b8-262w4\" (UID: \"4e0f7f56-38c3-4f37-828e-0ce1b125d556\") " pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.160332 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cfvs2"] Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.226521 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0814c523-762c-4bef-9b38-bd4bbd965a7a-registry-certificates\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.226596 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0814c523-762c-4bef-9b38-bd4bbd965a7a-trusted-ca\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.226625 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-658f7\" (UniqueName: \"kubernetes.io/projected/0814c523-762c-4bef-9b38-bd4bbd965a7a-kube-api-access-658f7\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.226825 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.226942 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/0814c523-762c-4bef-9b38-bd4bbd965a7a-registry-tls\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.227178 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0814c523-762c-4bef-9b38-bd4bbd965a7a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.227332 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0814c523-762c-4bef-9b38-bd4bbd965a7a-bound-sa-token\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.227583 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0814c523-762c-4bef-9b38-bd4bbd965a7a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.252800 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.255282 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.328992 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0814c523-762c-4bef-9b38-bd4bbd965a7a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.329093 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0814c523-762c-4bef-9b38-bd4bbd965a7a-registry-certificates\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.329124 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0814c523-762c-4bef-9b38-bd4bbd965a7a-trusted-ca\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.329147 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-658f7\" (UniqueName: \"kubernetes.io/projected/0814c523-762c-4bef-9b38-bd4bbd965a7a-kube-api-access-658f7\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.329190 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0814c523-762c-4bef-9b38-bd4bbd965a7a-registry-tls\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.329243 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0814c523-762c-4bef-9b38-bd4bbd965a7a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.329264 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0814c523-762c-4bef-9b38-bd4bbd965a7a-bound-sa-token\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.329717 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0814c523-762c-4bef-9b38-bd4bbd965a7a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.331769 4869 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0814c523-762c-4bef-9b38-bd4bbd965a7a-trusted-ca\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.332728 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0814c523-762c-4bef-9b38-bd4bbd965a7a-registry-certificates\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.333935 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0814c523-762c-4bef-9b38-bd4bbd965a7a-registry-tls\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.340492 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0814c523-762c-4bef-9b38-bd4bbd965a7a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.351357 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-658f7\" (UniqueName: \"kubernetes.io/projected/0814c523-762c-4bef-9b38-bd4bbd965a7a-kube-api-access-658f7\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.351518 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0814c523-762c-4bef-9b38-bd4bbd965a7a-bound-sa-token\") pod \"image-registry-66df7c8f76-cfvs2\" (UID: \"0814c523-762c-4bef-9b38-bd4bbd965a7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.505133 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.564454 4869 generic.go:334] "Generic (PLEG): container finished" podID="e44d1050-60ab-468f-8716-2d74939a3820" containerID="6577c34c76ce03b9b7c660fe77d9b9d79a5590af76fc29973182e32afb76235c" exitCode=0 Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.564497 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n" event={"ID":"e44d1050-60ab-468f-8716-2d74939a3820","Type":"ContainerDied","Data":"6577c34c76ce03b9b7c660fe77d9b9d79a5590af76fc29973182e32afb76235c"} Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.664711 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4"] Jan 27 10:00:02 crc kubenswrapper[4869]: W0127 10:00:02.674533 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e0f7f56_38c3_4f37_828e_0ce1b125d556.slice/crio-020fdf32755642895bec48dd20e982fa20ac621821d10b1f9ef797af3e719121 WatchSource:0}: Error finding container 020fdf32755642895bec48dd20e982fa20ac621821d10b1f9ef797af3e719121: Status 404 returned error can't find the container with id 020fdf32755642895bec48dd20e982fa20ac621821d10b1f9ef797af3e719121 Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.825987 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.915783 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-cfvs2"] Jan 27 10:00:02 crc kubenswrapper[4869]: W0127 10:00:02.919245 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0814c523_762c_4bef_9b38_bd4bbd965a7a.slice/crio-253c2b82788fc14808d17869cedfe5e4debc0e8f4cb302a36ca5104f10db5490 WatchSource:0}: Error finding container 253c2b82788fc14808d17869cedfe5e4debc0e8f4cb302a36ca5104f10db5490: Status 404 returned error can't find the container with id 253c2b82788fc14808d17869cedfe5e4debc0e8f4cb302a36ca5104f10db5490 Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.936081 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gbc4\" (UniqueName: \"kubernetes.io/projected/e44d1050-60ab-468f-8716-2d74939a3820-kube-api-access-5gbc4\") pod \"e44d1050-60ab-468f-8716-2d74939a3820\" (UID: \"e44d1050-60ab-468f-8716-2d74939a3820\") " Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.936171 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e44d1050-60ab-468f-8716-2d74939a3820-config-volume\") pod \"e44d1050-60ab-468f-8716-2d74939a3820\" (UID: \"e44d1050-60ab-468f-8716-2d74939a3820\") " Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.936232 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e44d1050-60ab-468f-8716-2d74939a3820-secret-volume\") pod \"e44d1050-60ab-468f-8716-2d74939a3820\" (UID: \"e44d1050-60ab-468f-8716-2d74939a3820\") " Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.937870 4869 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/e44d1050-60ab-468f-8716-2d74939a3820-config-volume" (OuterVolumeSpecName: "config-volume") pod "e44d1050-60ab-468f-8716-2d74939a3820" (UID: "e44d1050-60ab-468f-8716-2d74939a3820"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.941471 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e44d1050-60ab-468f-8716-2d74939a3820-kube-api-access-5gbc4" (OuterVolumeSpecName: "kube-api-access-5gbc4") pod "e44d1050-60ab-468f-8716-2d74939a3820" (UID: "e44d1050-60ab-468f-8716-2d74939a3820"). InnerVolumeSpecName "kube-api-access-5gbc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:00:02 crc kubenswrapper[4869]: I0127 10:00:02.941700 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e44d1050-60ab-468f-8716-2d74939a3820-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e44d1050-60ab-468f-8716-2d74939a3820" (UID: "e44d1050-60ab-468f-8716-2d74939a3820"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 10:00:03 crc kubenswrapper[4869]: I0127 10:00:03.037486 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e44d1050-60ab-468f-8716-2d74939a3820-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 10:00:03 crc kubenswrapper[4869]: I0127 10:00:03.037533 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gbc4\" (UniqueName: \"kubernetes.io/projected/e44d1050-60ab-468f-8716-2d74939a3820-kube-api-access-5gbc4\") on node \"crc\" DevicePath \"\"" Jan 27 10:00:03 crc kubenswrapper[4869]: I0127 10:00:03.037543 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e44d1050-60ab-468f-8716-2d74939a3820-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 10:00:03 crc kubenswrapper[4869]: I0127 10:00:03.570581 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n" event={"ID":"e44d1050-60ab-468f-8716-2d74939a3820","Type":"ContainerDied","Data":"c0c710b4758d25b59336c304a8131a22ead251e77e7fbf887b495eec8e7af9f6"} Jan 27 10:00:03 crc kubenswrapper[4869]: I0127 10:00:03.570628 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0c710b4758d25b59336c304a8131a22ead251e77e7fbf887b495eec8e7af9f6" Jan 27 10:00:03 crc kubenswrapper[4869]: I0127 10:00:03.570629 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n" Jan 27 10:00:03 crc kubenswrapper[4869]: I0127 10:00:03.572953 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" event={"ID":"0814c523-762c-4bef-9b38-bd4bbd965a7a","Type":"ContainerStarted","Data":"7bc39344310eb25e0f74fe5430f84e284a095589fa16e0af1e7ba014a1930b85"} Jan 27 10:00:03 crc kubenswrapper[4869]: I0127 10:00:03.573120 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:03 crc kubenswrapper[4869]: I0127 10:00:03.573209 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" event={"ID":"0814c523-762c-4bef-9b38-bd4bbd965a7a","Type":"ContainerStarted","Data":"253c2b82788fc14808d17869cedfe5e4debc0e8f4cb302a36ca5104f10db5490"} Jan 27 10:00:03 crc kubenswrapper[4869]: I0127 10:00:03.574260 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4" event={"ID":"4e0f7f56-38c3-4f37-828e-0ce1b125d556","Type":"ContainerStarted","Data":"1c3bfbc2a8c73b3be86fab25be777afc382f74e65bbb7538d8cff00aa3c130ef"} Jan 27 10:00:03 crc kubenswrapper[4869]: I0127 10:00:03.574288 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4" event={"ID":"4e0f7f56-38c3-4f37-828e-0ce1b125d556","Type":"ContainerStarted","Data":"020fdf32755642895bec48dd20e982fa20ac621821d10b1f9ef797af3e719121"} Jan 27 10:00:03 crc kubenswrapper[4869]: I0127 10:00:03.574459 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4" Jan 27 10:00:03 crc kubenswrapper[4869]: I0127 10:00:03.579564 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4" Jan 27 10:00:03 crc kubenswrapper[4869]: I0127 10:00:03.605398 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" podStartSLOduration=1.605379815 podStartE2EDuration="1.605379815s" podCreationTimestamp="2026-01-27 10:00:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 10:00:03.598334318 +0000 UTC m=+372.218758421" watchObservedRunningTime="2026-01-27 10:00:03.605379815 +0000 UTC m=+372.225803898" Jan 27 10:00:03 crc kubenswrapper[4869]: I0127 10:00:03.617121 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d776c45b8-262w4" podStartSLOduration=3.6171049379999998 podStartE2EDuration="3.617104938s" podCreationTimestamp="2026-01-27 10:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 10:00:03.611601278 +0000 UTC m=+372.232025371" watchObservedRunningTime="2026-01-27 10:00:03.617104938 +0000 UTC m=+372.237529021" Jan 27 10:00:07 crc kubenswrapper[4869]: I0127 10:00:07.906707 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zqvpq" Jan 27 10:00:07 crc kubenswrapper[4869]: I0127 10:00:07.965131 4869 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zqvpq" Jan 27 10:00:15 crc kubenswrapper[4869]: I0127 10:00:15.697959 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:00:15 crc kubenswrapper[4869]: I0127 10:00:15.699168 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:00:22 crc kubenswrapper[4869]: I0127 10:00:22.510250 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-cfvs2" Jan 27 10:00:22 crc kubenswrapper[4869]: I0127 10:00:22.568575 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jsrbp"] Jan 27 10:00:45 crc kubenswrapper[4869]: I0127 10:00:45.698388 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:00:45 crc kubenswrapper[4869]: I0127 10:00:45.699844 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:00:45 crc kubenswrapper[4869]: I0127 10:00:45.699992 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 10:00:45 crc kubenswrapper[4869]: I0127 10:00:45.700642 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b2c22e450bc36c30d04521d1630cd32b9d97fe2a4e5e905590b0f57351fdac38"} pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 10:00:45 crc kubenswrapper[4869]: I0127 10:00:45.700772 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" containerID="cri-o://b2c22e450bc36c30d04521d1630cd32b9d97fe2a4e5e905590b0f57351fdac38" gracePeriod=600 Jan 27 10:00:46 crc kubenswrapper[4869]: I0127 10:00:46.811289 4869 generic.go:334] "Generic (PLEG): container finished" podID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerID="b2c22e450bc36c30d04521d1630cd32b9d97fe2a4e5e905590b0f57351fdac38" exitCode=0 Jan 27 10:00:46 crc kubenswrapper[4869]: I0127 10:00:46.811406 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" 
event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerDied","Data":"b2c22e450bc36c30d04521d1630cd32b9d97fe2a4e5e905590b0f57351fdac38"} Jan 27 10:00:46 crc kubenswrapper[4869]: I0127 10:00:46.811611 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerStarted","Data":"e4e1f681e75097eec891089999a26357ca0f39f6b81a768157a4ab35694ce21e"} Jan 27 10:00:46 crc kubenswrapper[4869]: I0127 10:00:46.811646 4869 scope.go:117] "RemoveContainer" containerID="c28d22757e1bcc8e1424c7659c4f3123487f927c7387e2670a206507fbac32c5" Jan 27 10:00:47 crc kubenswrapper[4869]: I0127 10:00:47.611369 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" podUID="61352dfb-6006-4c3f-b404-b32f8a54c08d" containerName="registry" containerID="cri-o://d257b8a2ac9177f08d130f151f9799a355fa2fd1049395d02bfab94f141b8644" gracePeriod=30 Jan 27 10:00:47 crc kubenswrapper[4869]: I0127 10:00:47.821283 4869 generic.go:334] "Generic (PLEG): container finished" podID="61352dfb-6006-4c3f-b404-b32f8a54c08d" containerID="d257b8a2ac9177f08d130f151f9799a355fa2fd1049395d02bfab94f141b8644" exitCode=0 Jan 27 10:00:47 crc kubenswrapper[4869]: I0127 10:00:47.821373 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" event={"ID":"61352dfb-6006-4c3f-b404-b32f8a54c08d","Type":"ContainerDied","Data":"d257b8a2ac9177f08d130f151f9799a355fa2fd1049395d02bfab94f141b8644"} Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.016717 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.172192 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxw6v\" (UniqueName: \"kubernetes.io/projected/61352dfb-6006-4c3f-b404-b32f8a54c08d-kube-api-access-dxw6v\") pod \"61352dfb-6006-4c3f-b404-b32f8a54c08d\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.172252 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/61352dfb-6006-4c3f-b404-b32f8a54c08d-ca-trust-extracted\") pod \"61352dfb-6006-4c3f-b404-b32f8a54c08d\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.172299 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/61352dfb-6006-4c3f-b404-b32f8a54c08d-installation-pull-secrets\") pod \"61352dfb-6006-4c3f-b404-b32f8a54c08d\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.172335 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/61352dfb-6006-4c3f-b404-b32f8a54c08d-registry-certificates\") pod \"61352dfb-6006-4c3f-b404-b32f8a54c08d\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.172355 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/61352dfb-6006-4c3f-b404-b32f8a54c08d-bound-sa-token\") pod \"61352dfb-6006-4c3f-b404-b32f8a54c08d\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.172512 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"61352dfb-6006-4c3f-b404-b32f8a54c08d\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.172545 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61352dfb-6006-4c3f-b404-b32f8a54c08d-trusted-ca\") pod \"61352dfb-6006-4c3f-b404-b32f8a54c08d\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.172571 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/61352dfb-6006-4c3f-b404-b32f8a54c08d-registry-tls\") pod \"61352dfb-6006-4c3f-b404-b32f8a54c08d\" (UID: \"61352dfb-6006-4c3f-b404-b32f8a54c08d\") " Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.177702 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61352dfb-6006-4c3f-b404-b32f8a54c08d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "61352dfb-6006-4c3f-b404-b32f8a54c08d" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.177713 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61352dfb-6006-4c3f-b404-b32f8a54c08d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "61352dfb-6006-4c3f-b404-b32f8a54c08d" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.179299 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61352dfb-6006-4c3f-b404-b32f8a54c08d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "61352dfb-6006-4c3f-b404-b32f8a54c08d" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.179876 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61352dfb-6006-4c3f-b404-b32f8a54c08d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "61352dfb-6006-4c3f-b404-b32f8a54c08d" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.180508 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61352dfb-6006-4c3f-b404-b32f8a54c08d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "61352dfb-6006-4c3f-b404-b32f8a54c08d" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.182686 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61352dfb-6006-4c3f-b404-b32f8a54c08d-kube-api-access-dxw6v" (OuterVolumeSpecName: "kube-api-access-dxw6v") pod "61352dfb-6006-4c3f-b404-b32f8a54c08d" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d"). InnerVolumeSpecName "kube-api-access-dxw6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.190436 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61352dfb-6006-4c3f-b404-b32f8a54c08d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "61352dfb-6006-4c3f-b404-b32f8a54c08d" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.191462 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "61352dfb-6006-4c3f-b404-b32f8a54c08d" (UID: "61352dfb-6006-4c3f-b404-b32f8a54c08d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.273610 4869 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/61352dfb-6006-4c3f-b404-b32f8a54c08d-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.273649 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/61352dfb-6006-4c3f-b404-b32f8a54c08d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.273663 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61352dfb-6006-4c3f-b404-b32f8a54c08d-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.273675 4869 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/61352dfb-6006-4c3f-b404-b32f8a54c08d-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.273686 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxw6v\" (UniqueName: \"kubernetes.io/projected/61352dfb-6006-4c3f-b404-b32f8a54c08d-kube-api-access-dxw6v\") on node \"crc\" DevicePath \"\"" Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.273695 4869 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/61352dfb-6006-4c3f-b404-b32f8a54c08d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.273704 4869 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/61352dfb-6006-4c3f-b404-b32f8a54c08d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.831290 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" 
event={"ID":"61352dfb-6006-4c3f-b404-b32f8a54c08d","Type":"ContainerDied","Data":"b91aab27c87e1112ea98a0683657d8269cc0a4444b07b34d4d29701d44745813"} Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.831353 4869 scope.go:117] "RemoveContainer" containerID="d257b8a2ac9177f08d130f151f9799a355fa2fd1049395d02bfab94f141b8644" Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.831605 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jsrbp" Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.873321 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jsrbp"] Jan 27 10:00:48 crc kubenswrapper[4869]: I0127 10:00:48.878114 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jsrbp"] Jan 27 10:00:50 crc kubenswrapper[4869]: I0127 10:00:50.040683 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61352dfb-6006-4c3f-b404-b32f8a54c08d" path="/var/lib/kubelet/pods/61352dfb-6006-4c3f-b404-b32f8a54c08d/volumes" Jan 27 10:02:45 crc kubenswrapper[4869]: I0127 10:02:45.698297 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:02:45 crc kubenswrapper[4869]: I0127 10:02:45.698924 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:02:52 crc kubenswrapper[4869]: I0127 10:02:52.220477 4869 scope.go:117] "RemoveContainer" containerID="f73a5ed74beeda10d95f73b5d70d6ee501eb273a2472a151a130ad8a49c6466b" Jan 27 10:03:15 crc kubenswrapper[4869]: I0127 10:03:15.698376 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:03:15 crc kubenswrapper[4869]: I0127 10:03:15.699069 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:03:45 crc kubenswrapper[4869]: I0127 10:03:45.697752 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:03:45 crc kubenswrapper[4869]: I0127 10:03:45.698303 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 27 10:03:45 crc kubenswrapper[4869]: I0127 10:03:45.698350 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 10:03:45 crc kubenswrapper[4869]: I0127 10:03:45.698960 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e4e1f681e75097eec891089999a26357ca0f39f6b81a768157a4ab35694ce21e"} pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 10:03:45 crc kubenswrapper[4869]: I0127 10:03:45.699057 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" containerID="cri-o://e4e1f681e75097eec891089999a26357ca0f39f6b81a768157a4ab35694ce21e" gracePeriod=600 Jan 27 10:03:45 crc kubenswrapper[4869]: I0127 10:03:45.953042 4869 generic.go:334] "Generic (PLEG): container finished" podID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerID="e4e1f681e75097eec891089999a26357ca0f39f6b81a768157a4ab35694ce21e" exitCode=0 Jan 27 10:03:45 crc kubenswrapper[4869]: I0127 10:03:45.953098 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerDied","Data":"e4e1f681e75097eec891089999a26357ca0f39f6b81a768157a4ab35694ce21e"} Jan 27 10:03:45 crc kubenswrapper[4869]: I0127 10:03:45.953367 4869 scope.go:117] "RemoveContainer" containerID="b2c22e450bc36c30d04521d1630cd32b9d97fe2a4e5e905590b0f57351fdac38" Jan 27 10:03:46 crc kubenswrapper[4869]: I0127 10:03:46.960139 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerStarted","Data":"4a99f8d4039d41e36670df28e70519808f43f55b1ba2158821f11696774fdec4"} Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.322150 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-vgctl"] Jan 27 10:03:50 crc kubenswrapper[4869]: E0127 10:03:50.322821 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61352dfb-6006-4c3f-b404-b32f8a54c08d" containerName="registry" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.322859 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="61352dfb-6006-4c3f-b404-b32f8a54c08d" containerName="registry" Jan 27 10:03:50 crc kubenswrapper[4869]: E0127 10:03:50.322889 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e44d1050-60ab-468f-8716-2d74939a3820" containerName="collect-profiles" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.322897 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e44d1050-60ab-468f-8716-2d74939a3820" containerName="collect-profiles" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.323013 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="e44d1050-60ab-468f-8716-2d74939a3820" containerName="collect-profiles" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.323035 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="61352dfb-6006-4c3f-b404-b32f8a54c08d" containerName="registry" Jan 27 10:03:50 crc 
kubenswrapper[4869]: I0127 10:03:50.323446 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vgctl" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.326952 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-gdtvw"] Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.327622 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-gdtvw" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.329808 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-vgctl"] Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.337027 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.342411 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-9lnwf" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.344380 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.344809 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2n7cz" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.357257 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-sbsgl"] Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.363172 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-sbsgl" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.364822 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-zpxj7" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.381273 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-gdtvw"] Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.396057 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-sbsgl"] Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.470655 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5cd7\" (UniqueName: \"kubernetes.io/projected/ae0abba1-31d1-4b88-92d9-4ddf5a80a00c-kube-api-access-v5cd7\") pod \"cert-manager-858654f9db-gdtvw\" (UID: \"ae0abba1-31d1-4b88-92d9-4ddf5a80a00c\") " pod="cert-manager/cert-manager-858654f9db-gdtvw" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.470706 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh7sj\" (UniqueName: \"kubernetes.io/projected/d066acfd-4f90-4e0d-a241-03b54d7d2ca3-kube-api-access-jh7sj\") pod \"cert-manager-cainjector-cf98fcc89-vgctl\" (UID: \"d066acfd-4f90-4e0d-a241-03b54d7d2ca3\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-vgctl" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.470817 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmdtw\" (UniqueName: \"kubernetes.io/projected/8d0480d6-1706-48a9-ba8e-baa99011f330-kube-api-access-lmdtw\") pod \"cert-manager-webhook-687f57d79b-sbsgl\" (UID: 
\"8d0480d6-1706-48a9-ba8e-baa99011f330\") " pod="cert-manager/cert-manager-webhook-687f57d79b-sbsgl" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.572237 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmdtw\" (UniqueName: \"kubernetes.io/projected/8d0480d6-1706-48a9-ba8e-baa99011f330-kube-api-access-lmdtw\") pod \"cert-manager-webhook-687f57d79b-sbsgl\" (UID: \"8d0480d6-1706-48a9-ba8e-baa99011f330\") " pod="cert-manager/cert-manager-webhook-687f57d79b-sbsgl" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.572327 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5cd7\" (UniqueName: \"kubernetes.io/projected/ae0abba1-31d1-4b88-92d9-4ddf5a80a00c-kube-api-access-v5cd7\") pod \"cert-manager-858654f9db-gdtvw\" (UID: \"ae0abba1-31d1-4b88-92d9-4ddf5a80a00c\") " pod="cert-manager/cert-manager-858654f9db-gdtvw" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.572351 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh7sj\" (UniqueName: \"kubernetes.io/projected/d066acfd-4f90-4e0d-a241-03b54d7d2ca3-kube-api-access-jh7sj\") pod \"cert-manager-cainjector-cf98fcc89-vgctl\" (UID: \"d066acfd-4f90-4e0d-a241-03b54d7d2ca3\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-vgctl" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.590214 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmdtw\" (UniqueName: \"kubernetes.io/projected/8d0480d6-1706-48a9-ba8e-baa99011f330-kube-api-access-lmdtw\") pod \"cert-manager-webhook-687f57d79b-sbsgl\" (UID: \"8d0480d6-1706-48a9-ba8e-baa99011f330\") " pod="cert-manager/cert-manager-webhook-687f57d79b-sbsgl" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.596352 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5cd7\" (UniqueName: \"kubernetes.io/projected/ae0abba1-31d1-4b88-92d9-4ddf5a80a00c-kube-api-access-v5cd7\") pod \"cert-manager-858654f9db-gdtvw\" (UID: \"ae0abba1-31d1-4b88-92d9-4ddf5a80a00c\") " pod="cert-manager/cert-manager-858654f9db-gdtvw" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.598014 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh7sj\" (UniqueName: \"kubernetes.io/projected/d066acfd-4f90-4e0d-a241-03b54d7d2ca3-kube-api-access-jh7sj\") pod \"cert-manager-cainjector-cf98fcc89-vgctl\" (UID: \"d066acfd-4f90-4e0d-a241-03b54d7d2ca3\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-vgctl" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.639741 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vgctl" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.647988 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-gdtvw" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.682860 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-sbsgl" Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.919899 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-sbsgl"] Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.931284 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 10:03:50 crc kubenswrapper[4869]: I0127 10:03:50.981918 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-sbsgl" event={"ID":"8d0480d6-1706-48a9-ba8e-baa99011f330","Type":"ContainerStarted","Data":"568478c5492dbf06fe33bbcca2ecc0fe60beabbb3af19f118bbe60507e240475"} Jan 27 10:03:51 crc kubenswrapper[4869]: I0127 10:03:51.120601 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-vgctl"] Jan 27 10:03:51 crc kubenswrapper[4869]: I0127 10:03:51.124514 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-gdtvw"] Jan 27 10:03:51 crc kubenswrapper[4869]: W0127 10:03:51.125281 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd066acfd_4f90_4e0d_a241_03b54d7d2ca3.slice/crio-f9cdfa190be1846414d49988234ec14925ba15fce43dee5f9c579efe3f6be19e WatchSource:0}: Error finding container f9cdfa190be1846414d49988234ec14925ba15fce43dee5f9c579efe3f6be19e: Status 404 returned error can't find the container with id f9cdfa190be1846414d49988234ec14925ba15fce43dee5f9c579efe3f6be19e Jan 27 10:03:51 crc kubenswrapper[4869]: W0127 10:03:51.127433 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae0abba1_31d1_4b88_92d9_4ddf5a80a00c.slice/crio-e044e2898961541115113508a879572685e877abd8a3c787a1fdc77b710f0f37 WatchSource:0}: Error finding container e044e2898961541115113508a879572685e877abd8a3c787a1fdc77b710f0f37: Status 404 returned error can't find the container with id e044e2898961541115113508a879572685e877abd8a3c787a1fdc77b710f0f37 Jan 27 10:03:51 crc kubenswrapper[4869]: I0127 10:03:51.989681 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vgctl" event={"ID":"d066acfd-4f90-4e0d-a241-03b54d7d2ca3","Type":"ContainerStarted","Data":"f9cdfa190be1846414d49988234ec14925ba15fce43dee5f9c579efe3f6be19e"} Jan 27 10:03:51 crc kubenswrapper[4869]: I0127 10:03:51.991282 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-gdtvw" event={"ID":"ae0abba1-31d1-4b88-92d9-4ddf5a80a00c","Type":"ContainerStarted","Data":"e044e2898961541115113508a879572685e877abd8a3c787a1fdc77b710f0f37"} Jan 27 10:03:52 crc kubenswrapper[4869]: I0127 10:03:52.265494 4869 scope.go:117] "RemoveContainer" containerID="667b800e52b99017a2f1cdc68ffaa993d23dd5668e36b40f4de2bdee33d58e83" Jan 27 10:03:52 crc kubenswrapper[4869]: I0127 10:03:52.371239 4869 scope.go:117] "RemoveContainer" containerID="faf10e488cc5654ed22011cc18359c8077725f40fcbdc7cc37efffaed295efd0" Jan 27 10:03:56 crc kubenswrapper[4869]: I0127 10:03:56.013029 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vgctl" event={"ID":"d066acfd-4f90-4e0d-a241-03b54d7d2ca3","Type":"ContainerStarted","Data":"42a1089c666bef85963b73e303f32323790de66c3b9af902ac4de8c8a68fc599"} Jan 27 10:03:56 crc kubenswrapper[4869]: 
Jan 27 10:03:56 crc kubenswrapper[4869]: I0127 10:03:56.014782 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-gdtvw" event={"ID":"ae0abba1-31d1-4b88-92d9-4ddf5a80a00c","Type":"ContainerStarted","Data":"51dbaf19f32889e3f85542997d233e9246daa6405473527044f4c3884bec1dfd"}
Jan 27 10:03:56 crc kubenswrapper[4869]: I0127 10:03:56.016416 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-sbsgl" event={"ID":"8d0480d6-1706-48a9-ba8e-baa99011f330","Type":"ContainerStarted","Data":"8b5473b99da0acb71f049b6b896a97c585200ae19f40dcddae5a6c4d8a04d1b7"}
Jan 27 10:03:56 crc kubenswrapper[4869]: I0127 10:03:56.016546 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-sbsgl"
Jan 27 10:03:56 crc kubenswrapper[4869]: I0127 10:03:56.030526 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-vgctl" podStartSLOduration=2.219994545 podStartE2EDuration="6.030509563s" podCreationTimestamp="2026-01-27 10:03:50 +0000 UTC" firstStartedPulling="2026-01-27 10:03:51.129025094 +0000 UTC m=+599.749449187" lastFinishedPulling="2026-01-27 10:03:54.939540122 +0000 UTC m=+603.559964205" observedRunningTime="2026-01-27 10:03:56.026651957 +0000 UTC m=+604.647076050" watchObservedRunningTime="2026-01-27 10:03:56.030509563 +0000 UTC m=+604.650933646"
Jan 27 10:03:56 crc kubenswrapper[4869]: I0127 10:03:56.045442 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-gdtvw" podStartSLOduration=2.283430394 podStartE2EDuration="6.045423724s" podCreationTimestamp="2026-01-27 10:03:50 +0000 UTC" firstStartedPulling="2026-01-27 10:03:51.130256322 +0000 UTC m=+599.750680395" lastFinishedPulling="2026-01-27 10:03:54.892249642 +0000 UTC m=+603.512673725" observedRunningTime="2026-01-27 10:03:56.04463229 +0000 UTC m=+604.665056373" watchObservedRunningTime="2026-01-27 10:03:56.045423724 +0000 UTC m=+604.665847817"
Jan 27 10:03:56 crc kubenswrapper[4869]: I0127 10:03:56.076684 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-sbsgl" podStartSLOduration=2.1325508109999998 podStartE2EDuration="6.076662498s" podCreationTimestamp="2026-01-27 10:03:50 +0000 UTC" firstStartedPulling="2026-01-27 10:03:50.931054929 +0000 UTC m=+599.551479002" lastFinishedPulling="2026-01-27 10:03:54.875166606 +0000 UTC m=+603.495590689" observedRunningTime="2026-01-27 10:03:56.074936746 +0000 UTC m=+604.695360859" watchObservedRunningTime="2026-01-27 10:03:56.076662498 +0000 UTC m=+604.697086591"
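The three pod_startup_latency_tracker entries above encode the kubelet's startup-SLO accounting: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling), since pull time is excluded from the startup SLO. Checking the cainjector entry in Python (timestamps truncated to microseconds here, so the result matches the logged values only up to rounding):

from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
created    = datetime.strptime("2026-01-27 10:03:50.000000", fmt)  # podCreationTimestamp
first_pull = datetime.strptime("2026-01-27 10:03:51.129025", fmt)  # firstStartedPulling
last_pull  = datetime.strptime("2026-01-27 10:03:54.939540", fmt)  # lastFinishedPulling
observed   = datetime.strptime("2026-01-27 10:03:56.030509", fmt)  # watchObservedRunningTime

e2e  = (observed - created).total_seconds()      # ~6.030509s -> podStartE2EDuration
pull = (last_pull - first_pull).total_seconds()  # ~3.810515s image-pull window
slo  = e2e - pull                                # ~2.219995s -> podStartSLOduration
print(e2e, pull, slo)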
containerID="cri-o://6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea" gracePeriod=30 Jan 27 10:03:59 crc kubenswrapper[4869]: I0127 10:03:59.928203 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="nbdb" containerID="cri-o://2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874" gracePeriod=30 Jan 27 10:03:59 crc kubenswrapper[4869]: I0127 10:03:59.928239 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="northd" containerID="cri-o://0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131" gracePeriod=30 Jan 27 10:03:59 crc kubenswrapper[4869]: I0127 10:03:59.928278 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovn-acl-logging" containerID="cri-o://c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76" gracePeriod=30 Jan 27 10:03:59 crc kubenswrapper[4869]: I0127 10:03:59.928304 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3" gracePeriod=30 Jan 27 10:03:59 crc kubenswrapper[4869]: I0127 10:03:59.928283 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="kube-rbac-proxy-node" containerID="cri-o://f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a" gracePeriod=30 Jan 27 10:03:59 crc kubenswrapper[4869]: I0127 10:03:59.973661 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovnkube-controller" containerID="cri-o://4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a" gracePeriod=30 Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.041311 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xj5gd_c4e8dfa0-1849-457a-b564-4f77e534a7e0/kube-multus/2.log" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.041770 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xj5gd_c4e8dfa0-1849-457a-b564-4f77e534a7e0/kube-multus/1.log" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.041809 4869 generic.go:334] "Generic (PLEG): container finished" podID="c4e8dfa0-1849-457a-b564-4f77e534a7e0" containerID="df9de8342d1f640ffd0f53a86a79843cfa53f0e870d9b0f7f8c5fa4f8f2b5342" exitCode=2 Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.041853 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xj5gd" event={"ID":"c4e8dfa0-1849-457a-b564-4f77e534a7e0","Type":"ContainerDied","Data":"df9de8342d1f640ffd0f53a86a79843cfa53f0e870d9b0f7f8c5fa4f8f2b5342"} Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.041890 4869 scope.go:117] "RemoveContainer" containerID="66392d6e395aa6ef33d94595eb5b6670f9205bc5591c35db295b8e29d84c7c63" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.042271 4869 
scope.go:117] "RemoveContainer" containerID="df9de8342d1f640ffd0f53a86a79843cfa53f0e870d9b0f7f8c5fa4f8f2b5342" Jan 27 10:04:00 crc kubenswrapper[4869]: E0127 10:04:00.042459 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-xj5gd_openshift-multus(c4e8dfa0-1849-457a-b564-4f77e534a7e0)\"" pod="openshift-multus/multus-xj5gd" podUID="c4e8dfa0-1849-457a-b564-4f77e534a7e0" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.670013 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-45hzs_8d38c693-da40-464a-9822-f98fb1b5ca35/ovnkube-controller/3.log" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.672575 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-45hzs_8d38c693-da40-464a-9822-f98fb1b5ca35/ovn-acl-logging/0.log" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.673036 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-45hzs_8d38c693-da40-464a-9822-f98fb1b5ca35/ovn-controller/0.log" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.673528 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.686265 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-sbsgl" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.739548 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bcjf7"] Jan 27 10:04:00 crc kubenswrapper[4869]: E0127 10:04:00.739771 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.739787 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 10:04:00 crc kubenswrapper[4869]: E0127 10:04:00.739797 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="nbdb" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.739806 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="nbdb" Jan 27 10:04:00 crc kubenswrapper[4869]: E0127 10:04:00.739813 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="northd" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.739846 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="northd" Jan 27 10:04:00 crc kubenswrapper[4869]: E0127 10:04:00.739856 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovn-acl-logging" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.739865 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovn-acl-logging" Jan 27 10:04:00 crc kubenswrapper[4869]: E0127 10:04:00.739873 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="kubecfg-setup" Jan 27 
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.739880 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="kubecfg-setup"
Jan 27 10:04:00 crc kubenswrapper[4869]: E0127 10:04:00.739898 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="sbdb"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.739904 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="sbdb"
Jan 27 10:04:00 crc kubenswrapper[4869]: E0127 10:04:00.739913 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovnkube-controller"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.739918 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovnkube-controller"
Jan 27 10:04:00 crc kubenswrapper[4869]: E0127 10:04:00.739925 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="kube-rbac-proxy-node"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.739931 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="kube-rbac-proxy-node"
Jan 27 10:04:00 crc kubenswrapper[4869]: E0127 10:04:00.739940 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovnkube-controller"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.739946 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovnkube-controller"
Jan 27 10:04:00 crc kubenswrapper[4869]: E0127 10:04:00.739953 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovnkube-controller"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.739958 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovnkube-controller"
Jan 27 10:04:00 crc kubenswrapper[4869]: E0127 10:04:00.739967 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovn-controller"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.739973 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovn-controller"
Jan 27 10:04:00 crc kubenswrapper[4869]: E0127 10:04:00.739983 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovnkube-controller"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.739989 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovnkube-controller"
Jan 27 10:04:00 crc kubenswrapper[4869]: E0127 10:04:00.739996 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovnkube-controller"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.740002 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovnkube-controller"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.740104 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovnkube-controller"
containerName="ovnkube-controller" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.740112 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="kube-rbac-proxy-node" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.740122 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovnkube-controller" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.740130 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovn-controller" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.740138 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="northd" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.740195 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="nbdb" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.740204 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.740212 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovn-acl-logging" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.740221 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="sbdb" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.740227 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovnkube-controller" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.740406 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovnkube-controller" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.740416 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerName="ovnkube-controller" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.742573 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.827805 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl2nv\" (UniqueName: \"kubernetes.io/projected/8d38c693-da40-464a-9822-f98fb1b5ca35-kube-api-access-zl2nv\") pod \"8d38c693-da40-464a-9822-f98fb1b5ca35\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.827869 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-run-systemd\") pod \"8d38c693-da40-464a-9822-f98fb1b5ca35\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.827890 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-log-socket\") pod \"8d38c693-da40-464a-9822-f98fb1b5ca35\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.827915 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-run-openvswitch\") pod \"8d38c693-da40-464a-9822-f98fb1b5ca35\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.827939 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d38c693-da40-464a-9822-f98fb1b5ca35-env-overrides\") pod \"8d38c693-da40-464a-9822-f98fb1b5ca35\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.827966 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8d38c693-da40-464a-9822-f98fb1b5ca35-ovnkube-script-lib\") pod \"8d38c693-da40-464a-9822-f98fb1b5ca35\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.827980 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-slash\") pod \"8d38c693-da40-464a-9822-f98fb1b5ca35\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.828017 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-run-ovn\") pod \"8d38c693-da40-464a-9822-f98fb1b5ca35\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.828018 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-log-socket" (OuterVolumeSpecName: "log-socket") pod "8d38c693-da40-464a-9822-f98fb1b5ca35" (UID: "8d38c693-da40-464a-9822-f98fb1b5ca35"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.828055 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "8d38c693-da40-464a-9822-f98fb1b5ca35" (UID: "8d38c693-da40-464a-9822-f98fb1b5ca35"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.828188 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d38c693-da40-464a-9822-f98fb1b5ca35-ovn-node-metrics-cert\") pod \"8d38c693-da40-464a-9822-f98fb1b5ca35\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.828433 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d38c693-da40-464a-9822-f98fb1b5ca35-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "8d38c693-da40-464a-9822-f98fb1b5ca35" (UID: "8d38c693-da40-464a-9822-f98fb1b5ca35"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.828462 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-slash" (OuterVolumeSpecName: "host-slash") pod "8d38c693-da40-464a-9822-f98fb1b5ca35" (UID: "8d38c693-da40-464a-9822-f98fb1b5ca35"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.828484 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d38c693-da40-464a-9822-f98fb1b5ca35-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "8d38c693-da40-464a-9822-f98fb1b5ca35" (UID: "8d38c693-da40-464a-9822-f98fb1b5ca35"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.828520 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "8d38c693-da40-464a-9822-f98fb1b5ca35" (UID: "8d38c693-da40-464a-9822-f98fb1b5ca35"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.828529 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-cni-bin\") pod \"8d38c693-da40-464a-9822-f98fb1b5ca35\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.828552 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "8d38c693-da40-464a-9822-f98fb1b5ca35" (UID: "8d38c693-da40-464a-9822-f98fb1b5ca35"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.828793 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d38c693-da40-464a-9822-f98fb1b5ca35-ovnkube-config\") pod \"8d38c693-da40-464a-9822-f98fb1b5ca35\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.828871 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-kubelet\") pod \"8d38c693-da40-464a-9822-f98fb1b5ca35\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.828893 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-var-lib-openvswitch\") pod \"8d38c693-da40-464a-9822-f98fb1b5ca35\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.828925 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-cni-netd\") pod \"8d38c693-da40-464a-9822-f98fb1b5ca35\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.828969 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-systemd-units\") pod \"8d38c693-da40-464a-9822-f98fb1b5ca35\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829059 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-etc-openvswitch\") pod \"8d38c693-da40-464a-9822-f98fb1b5ca35\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829117 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-run-netns\") pod \"8d38c693-da40-464a-9822-f98fb1b5ca35\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829158 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-run-ovn-kubernetes\") pod \"8d38c693-da40-464a-9822-f98fb1b5ca35\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829182 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-node-log\") pod \"8d38c693-da40-464a-9822-f98fb1b5ca35\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829186 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "8d38c693-da40-464a-9822-f98fb1b5ca35" 
(UID: "8d38c693-da40-464a-9822-f98fb1b5ca35"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829202 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-var-lib-cni-networks-ovn-kubernetes\") pod \"8d38c693-da40-464a-9822-f98fb1b5ca35\" (UID: \"8d38c693-da40-464a-9822-f98fb1b5ca35\") " Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829479 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "8d38c693-da40-464a-9822-f98fb1b5ca35" (UID: "8d38c693-da40-464a-9822-f98fb1b5ca35"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829574 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "8d38c693-da40-464a-9822-f98fb1b5ca35" (UID: "8d38c693-da40-464a-9822-f98fb1b5ca35"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829583 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "8d38c693-da40-464a-9822-f98fb1b5ca35" (UID: "8d38c693-da40-464a-9822-f98fb1b5ca35"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829503 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "8d38c693-da40-464a-9822-f98fb1b5ca35" (UID: "8d38c693-da40-464a-9822-f98fb1b5ca35"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829593 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-systemd-units\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829534 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-node-log" (OuterVolumeSpecName: "node-log") pod "8d38c693-da40-464a-9822-f98fb1b5ca35" (UID: "8d38c693-da40-464a-9822-f98fb1b5ca35"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829548 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "8d38c693-da40-464a-9822-f98fb1b5ca35" (UID: "8d38c693-da40-464a-9822-f98fb1b5ca35"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829561 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "8d38c693-da40-464a-9822-f98fb1b5ca35" (UID: "8d38c693-da40-464a-9822-f98fb1b5ca35"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829554 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "8d38c693-da40-464a-9822-f98fb1b5ca35" (UID: "8d38c693-da40-464a-9822-f98fb1b5ca35"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829673 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d38c693-da40-464a-9822-f98fb1b5ca35-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "8d38c693-da40-464a-9822-f98fb1b5ca35" (UID: "8d38c693-da40-464a-9822-f98fb1b5ca35"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829692 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-etc-openvswitch\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829727 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9hpw\" (UniqueName: \"kubernetes.io/projected/124c47df-f98b-41ee-a9da-128eb2a8fe99-kube-api-access-s9hpw\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829764 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-host-slash\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829808 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-host-cni-bin\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829846 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-host-kubelet\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829893 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/124c47df-f98b-41ee-a9da-128eb2a8fe99-env-overrides\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829920 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-var-lib-openvswitch\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.829972 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-host-run-netns\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830019 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/124c47df-f98b-41ee-a9da-128eb2a8fe99-ovn-node-metrics-cert\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830051 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830075 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-host-run-ovn-kubernetes\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830125 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/124c47df-f98b-41ee-a9da-128eb2a8fe99-ovnkube-config\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830154 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-run-systemd\") pod \"ovnkube-node-bcjf7\" (UID: 
\"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830186 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-node-log\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830209 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-host-cni-netd\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830233 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-run-openvswitch\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830253 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/124c47df-f98b-41ee-a9da-128eb2a8fe99-ovnkube-script-lib\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830294 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-run-ovn\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830335 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-log-socket\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830385 4869 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830397 4869 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830408 4869 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830418 4869 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-systemd-units\") on node \"crc\" DevicePath 
\"\"" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830430 4869 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830440 4869 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830452 4869 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830463 4869 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830474 4869 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830485 4869 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830496 4869 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830507 4869 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d38c693-da40-464a-9822-f98fb1b5ca35-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830518 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8d38c693-da40-464a-9822-f98fb1b5ca35-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830529 4869 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830538 4869 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830548 4869 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.830559 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d38c693-da40-464a-9822-f98fb1b5ca35-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:00 crc 
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.832644 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d38c693-da40-464a-9822-f98fb1b5ca35-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "8d38c693-da40-464a-9822-f98fb1b5ca35" (UID: "8d38c693-da40-464a-9822-f98fb1b5ca35"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.832711 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d38c693-da40-464a-9822-f98fb1b5ca35-kube-api-access-zl2nv" (OuterVolumeSpecName: "kube-api-access-zl2nv") pod "8d38c693-da40-464a-9822-f98fb1b5ca35" (UID: "8d38c693-da40-464a-9822-f98fb1b5ca35"). InnerVolumeSpecName "kube-api-access-zl2nv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.839725 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "8d38c693-da40-464a-9822-f98fb1b5ca35" (UID: "8d38c693-da40-464a-9822-f98fb1b5ca35"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.931985 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-log-socket\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932054 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-systemd-units\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932085 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-etc-openvswitch\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932112 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9hpw\" (UniqueName: \"kubernetes.io/projected/124c47df-f98b-41ee-a9da-128eb2a8fe99-kube-api-access-s9hpw\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932142 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-host-slash\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932162 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-host-cni-bin\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
\"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932176 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-host-kubelet\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932215 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/124c47df-f98b-41ee-a9da-128eb2a8fe99-env-overrides\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932234 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-var-lib-openvswitch\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932250 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-host-run-netns\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932271 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/124c47df-f98b-41ee-a9da-128eb2a8fe99-ovn-node-metrics-cert\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932287 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932303 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-host-run-ovn-kubernetes\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932319 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-run-systemd\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932332 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/124c47df-f98b-41ee-a9da-128eb2a8fe99-ovnkube-config\") pod \"ovnkube-node-bcjf7\" (UID: 
\"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932350 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-node-log\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932366 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-host-cni-netd\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932382 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-run-openvswitch\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932399 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/124c47df-f98b-41ee-a9da-128eb2a8fe99-ovnkube-script-lib\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932422 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-run-ovn\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932468 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl2nv\" (UniqueName: \"kubernetes.io/projected/8d38c693-da40-464a-9822-f98fb1b5ca35-kube-api-access-zl2nv\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932478 4869 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8d38c693-da40-464a-9822-f98fb1b5ca35-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932487 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d38c693-da40-464a-9822-f98fb1b5ca35-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932527 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-run-ovn\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932567 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-log-socket\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932587 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-systemd-units\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932607 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-etc-openvswitch\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932868 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-host-slash\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932892 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-host-cni-bin\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.932913 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-host-kubelet\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.933706 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/124c47df-f98b-41ee-a9da-128eb2a8fe99-env-overrides\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.933738 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-var-lib-openvswitch\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.933758 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-host-run-netns\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.935128 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/124c47df-f98b-41ee-a9da-128eb2a8fe99-ovnkube-config\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.935241 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.935295 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-host-run-ovn-kubernetes\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.935348 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-run-systemd\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.935396 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-host-cni-netd\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.935441 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-node-log\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.935486 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/124c47df-f98b-41ee-a9da-128eb2a8fe99-run-openvswitch\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.936405 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/124c47df-f98b-41ee-a9da-128eb2a8fe99-ovnkube-script-lib\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.940746 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/124c47df-f98b-41ee-a9da-128eb2a8fe99-ovn-node-metrics-cert\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:00 crc kubenswrapper[4869]: I0127 10:04:00.962468 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9hpw\" (UniqueName: \"kubernetes.io/projected/124c47df-f98b-41ee-a9da-128eb2a8fe99-kube-api-access-s9hpw\") pod \"ovnkube-node-bcjf7\" (UID: \"124c47df-f98b-41ee-a9da-128eb2a8fe99\") " pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.047686 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-45hzs_8d38c693-da40-464a-9822-f98fb1b5ca35/ovnkube-controller/3.log"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.049638 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-45hzs_8d38c693-da40-464a-9822-f98fb1b5ca35/ovn-acl-logging/0.log"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.050464 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-45hzs_8d38c693-da40-464a-9822-f98fb1b5ca35/ovn-controller/0.log"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.050839 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerID="4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a" exitCode=0
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.050859 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerID="6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea" exitCode=0
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.050868 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerID="2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874" exitCode=0
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.050866 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerDied","Data":"4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.050912 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerDied","Data":"6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.050929 4869 scope.go:117] "RemoveContainer" containerID="4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.050876 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerID="0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131" exitCode=0
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051013 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerID="175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3" exitCode=0
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051029 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerID="f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a" exitCode=0
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051040 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerID="c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76" exitCode=143
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051048 4869 generic.go:334] "Generic (PLEG): container finished" podID="8d38c693-da40-464a-9822-f98fb1b5ca35" containerID="2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa" exitCode=143
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.050915 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.050930 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerDied","Data":"2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051103 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerDied","Data":"0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051118 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerDied","Data":"175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051128 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerDied","Data":"f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051139 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051148 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051153 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051159 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051164 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051169 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051174 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051180 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051185 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051192 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerDied","Data":"c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051199 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051205 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051211 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051216 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051222 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051227 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051232 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051237 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051242 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051247 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051255 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerDied","Data":"2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051262 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051268 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051274 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051280 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051285 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051291 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051296 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051301 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051306 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051311 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051318 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-45hzs" event={"ID":"8d38c693-da40-464a-9822-f98fb1b5ca35","Type":"ContainerDied","Data":"6f55aafb7ee0fb8e804bfbc2e3ef4d7925851605ccbe20a28f745ed1365db41e"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051325 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051330 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051335 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051340 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051344 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051349 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051354 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051359 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051363 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.051368 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8"}
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.054203 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xj5gd_c4e8dfa0-1849-457a-b564-4f77e534a7e0/kube-multus/2.log"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.054926 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.073998 4869 scope.go:117] "RemoveContainer" containerID="2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.080053 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-45hzs"]
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.085506 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-45hzs"]
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.133773 4869 scope.go:117] "RemoveContainer" containerID="6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.161948 4869 scope.go:117] "RemoveContainer" containerID="2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.174690 4869 scope.go:117] "RemoveContainer" containerID="0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.186669 4869 scope.go:117] "RemoveContainer" containerID="175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.199598 4869 scope.go:117] "RemoveContainer" containerID="f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.213733 4869 scope.go:117] "RemoveContainer" containerID="c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.265055 4869 scope.go:117] "RemoveContainer" containerID="2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.279187 4869 scope.go:117] "RemoveContainer" containerID="2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.291120 4869 scope.go:117] "RemoveContainer" containerID="4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a"
Jan 27 10:04:01 crc kubenswrapper[4869]: E0127 10:04:01.291767 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a\": container with ID starting with 4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a not found: ID does not exist" containerID="4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.291807 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a"} err="failed to get container status \"4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a\": rpc error: code = NotFound desc = could not find container \"4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a\": container with ID starting with 4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.291851 4869 scope.go:117] "RemoveContainer" containerID="2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690"
Jan 27 10:04:01 crc kubenswrapper[4869]: E0127 10:04:01.292220 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690\": container with ID starting with 2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690 not found: ID does not exist" containerID="2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.292276 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690"} err="failed to get container status \"2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690\": rpc error: code = NotFound desc = could not find container \"2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690\": container with ID starting with 2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.292314 4869 scope.go:117] "RemoveContainer" containerID="6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea"
Jan 27 10:04:01 crc kubenswrapper[4869]: E0127 10:04:01.292601 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\": container with ID starting with 6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea not found: ID does not exist" containerID="6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.292628 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea"} err="failed to get container status \"6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\": rpc error: code = NotFound desc = could not find container \"6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\": container with ID starting with 6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.292645 4869 scope.go:117] "RemoveContainer" containerID="2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874"
Jan 27 10:04:01 crc kubenswrapper[4869]: E0127 10:04:01.293023 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\": container with ID starting with 2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874 not found: ID does not exist" containerID="2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.293046 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874"} err="failed to get container status \"2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\": rpc error: code = NotFound desc = could not find container \"2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\": container with ID starting with 2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.293064 4869 scope.go:117] "RemoveContainer" containerID="0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131"
Jan 27 10:04:01 crc kubenswrapper[4869]: E0127 10:04:01.293448 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\": container with ID starting with 0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131 not found: ID does not exist" containerID="0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.293500 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131"} err="failed to get container status \"0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\": rpc error: code = NotFound desc = could not find container \"0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\": container with ID starting with 0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.293527 4869 scope.go:117] "RemoveContainer" containerID="175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3"
Jan 27 10:04:01 crc kubenswrapper[4869]: E0127 10:04:01.294067 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\": container with ID starting with 175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3 not found: ID does not exist" containerID="175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.294105 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3"} err="failed to get container status \"175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\": rpc error: code = NotFound desc = could not find container \"175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\": container with ID starting with 175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.294136 4869 scope.go:117] "RemoveContainer" containerID="f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a"
Jan 27 10:04:01 crc kubenswrapper[4869]: E0127 10:04:01.294594 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\": container with ID starting with f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a not found: ID does not exist" containerID="f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.294634 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a"} err="failed to get container status \"f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\": rpc error: code = NotFound desc = could not find container \"f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\": container with ID starting with f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.294659 4869 scope.go:117] "RemoveContainer" containerID="c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76"
Jan 27 10:04:01 crc kubenswrapper[4869]: E0127 10:04:01.295128 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\": container with ID starting with c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76 not found: ID does not exist" containerID="c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.295165 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76"} err="failed to get container status \"c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\": rpc error: code = NotFound desc = could not find container \"c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\": container with ID starting with c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.295193 4869 scope.go:117] "RemoveContainer" containerID="2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa"
Jan 27 10:04:01 crc kubenswrapper[4869]: E0127 10:04:01.295634 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\": container with ID starting with 2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa not found: ID does not exist" containerID="2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.295673 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa"} err="failed to get container status \"2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\": rpc error: code = NotFound desc = could not find container \"2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\": container with ID starting with 2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.295698 4869 scope.go:117] "RemoveContainer" containerID="2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8"
Jan 27 10:04:01 crc kubenswrapper[4869]: E0127 10:04:01.296142 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\": container with ID starting with 2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8 not found: ID does not exist" containerID="2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.296181 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8"} err="failed to get container status \"2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\": rpc error: code = NotFound desc = could not find container \"2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\": container with ID starting with 2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.296228 4869 scope.go:117] "RemoveContainer" containerID="4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.296600 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a"} err="failed to get container status \"4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a\": rpc error: code = NotFound desc = could not find container \"4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a\": container with ID starting with 4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.296633 4869 scope.go:117] "RemoveContainer" containerID="2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.297012 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690"} err="failed to get container status \"2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690\": rpc error: code = NotFound desc = could not find container \"2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690\": container with ID starting with 2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.297048 4869 scope.go:117] "RemoveContainer" containerID="6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.297335 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea"} err="failed to get container status \"6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\": rpc error: code = NotFound desc = could not find container \"6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\": container with ID starting with 6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.297362 4869 scope.go:117] "RemoveContainer" containerID="2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.297787 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874"} err="failed to get container status \"2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\": rpc error: code = NotFound desc = could not find container \"2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\": container with ID starting with 2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.297816 4869 scope.go:117] "RemoveContainer" containerID="0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.298134 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131"} err="failed to get container status \"0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\": rpc error: code = NotFound desc = could not find container \"0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\": container with ID starting with 0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.298156 4869 scope.go:117] "RemoveContainer" containerID="175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.298578 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3"} err="failed to get container status \"175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\": rpc error: code = NotFound desc = could not find container \"175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\": container with ID starting with 175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.298597 4869 scope.go:117] "RemoveContainer" containerID="f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.299011 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a"} err="failed to get container status \"f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\": rpc error: code = NotFound desc = could not find container \"f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\": container with ID starting with f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.299060 4869 scope.go:117] "RemoveContainer" containerID="c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.299371 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76"} err="failed to get container status \"c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\": rpc error: code = NotFound desc = could not find container \"c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\": container with ID starting with c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.299398 4869 scope.go:117] "RemoveContainer" containerID="2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.299701 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa"} err="failed to get container status \"2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\": rpc error: code = NotFound desc = could not find container \"2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\": container with ID starting with 2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.299722 4869 scope.go:117] "RemoveContainer" containerID="2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.300169 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8"} err="failed to get container status \"2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\": rpc error: code = NotFound desc = could not find container \"2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\": container with ID starting with 2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.300215 4869 scope.go:117] "RemoveContainer" containerID="4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.300567 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a"} err="failed to get container status \"4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a\": rpc error: code = NotFound desc = could not find container \"4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a\": container with ID starting with 4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.300611 4869 scope.go:117] "RemoveContainer" containerID="2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.300927 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690"} err="failed to get container status \"2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690\": rpc error: code = NotFound desc = could not find container \"2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690\": container with ID starting with 2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.300961 4869 scope.go:117] "RemoveContainer" containerID="6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.301385 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea"} err="failed to get container status \"6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\": rpc error: code = NotFound desc = could not find container \"6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\": container with ID starting with 6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.301416 4869 scope.go:117] "RemoveContainer" containerID="2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.301717 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874"} err="failed to get container status \"2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\": rpc error: code = NotFound desc = could not find container \"2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\": container with ID starting with 2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.301741 4869 scope.go:117] "RemoveContainer" containerID="0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.302060 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131"} err="failed to get container status \"0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\": rpc error: code = NotFound desc = could not find container \"0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\": container with ID starting with 0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.302082 4869 scope.go:117] "RemoveContainer" containerID="175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.302315 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3"} err="failed to get container status \"175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\": rpc error: code = NotFound desc = could not find container \"175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\": container with ID starting with 175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.302336 4869 scope.go:117] "RemoveContainer" containerID="f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.302575 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a"} err="failed to get container status \"f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\": rpc error: code = NotFound desc = could not find container \"f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\": container with ID starting with f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.302605 4869 scope.go:117] "RemoveContainer" containerID="c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.302884 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76"} err="failed to get container status \"c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\": rpc error: code = NotFound desc = could not find container \"c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\": container with ID starting with c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.302908 4869 scope.go:117] "RemoveContainer" containerID="2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.303138 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa"} err="failed to get container status \"2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\": rpc error: code = NotFound desc = could not find container \"2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\": container with ID starting with 2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.303167 4869 scope.go:117] "RemoveContainer" containerID="2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.303422 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8"} err="failed to get container status \"2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\": rpc error: code = NotFound desc = could not find container \"2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\": container with ID starting with 2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.303442 4869 scope.go:117] "RemoveContainer" containerID="4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.303681 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a"} err="failed to get container status \"4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a\": rpc error: code = NotFound desc = could not find container \"4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a\": container with ID starting with 4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.303700 4869 scope.go:117] "RemoveContainer" containerID="2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.304121 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690"} err="failed to get container status \"2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690\": rpc error: code = NotFound desc = could not find container \"2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690\": container with ID starting with 2ae67d8917edd39a4b61e8df9ebd1021e564ccc88dbbfca56181bb2968f82690 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.304142 4869 scope.go:117] "RemoveContainer" containerID="6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.304345 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea"} err="failed to get container status \"6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\": rpc error: code = NotFound desc = could not find container \"6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea\": container with ID starting with 6330c7a11c40e40d8a71f547e4218a8e699b387645aaf06cd8646c4e22c19aea not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.304360 4869 scope.go:117] "RemoveContainer" containerID="2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.304572 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874"} err="failed to get container status \"2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\": rpc error: code = NotFound desc = could not find container \"2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874\": container with ID starting with 2689867f7a2c7019f1da50ef53ba429a3485dc339123c2b96927784f9c50a874 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.304589 4869 scope.go:117] "RemoveContainer" containerID="0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.304779 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131"} err="failed to get container status \"0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\": rpc error: code = NotFound desc = could not find container \"0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131\": container with ID starting with 0ab86794b6dc74f860528bad1a2b2f131c8c58207aab984a4afe1090e2cdd131 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.304793 4869 scope.go:117] "RemoveContainer" containerID="175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.305141 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3"} err="failed to get container status \"175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\": rpc error: code = NotFound desc = could not find container \"175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3\": container with ID starting with 175a7292620fca54e20d64d700f3cdf18adb467e94d92d60be8a3c8f43a107f3 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.305185 4869 scope.go:117] "RemoveContainer" containerID="f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.307957 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a"} err="failed to get container status \"f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\": rpc error: code = NotFound desc = could not find container \"f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a\": container with ID starting with f3542677b6cd7c505394a41349e8086334d409c786ce29bd90d578800cfbe17a not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.307996 4869 scope.go:117] "RemoveContainer" containerID="c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.308270 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76"} err="failed to get container status \"c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\": rpc error: code = NotFound desc = could not find container \"c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76\": container with ID starting with c2f2ffa311c27dc5c2a5ff9d693073f8b90a836642bddaf95e49f2d387149d76 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.308295 4869 scope.go:117] "RemoveContainer" containerID="2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.311282 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa"} err="failed to get container status \"2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\": rpc error: code = NotFound desc = could not find container \"2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa\": container with ID starting with 2ef22578088021b9ec5e59592cf5c5cf1cc47eb4fefd13e71053e7ba9d1650fa not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.311368 4869 scope.go:117] "RemoveContainer" containerID="2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.312397 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8"} err="failed to get container status \"2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\": rpc error: code = NotFound desc = could not find container \"2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8\": container with ID starting with 2806eeb859178017665a2a469182ad65cc583e3917bf395ebb00099bfdbdb0f8 not found: ID does not exist"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.312464 4869 scope.go:117] "RemoveContainer" containerID="4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a"
Jan 27 10:04:01 crc kubenswrapper[4869]: I0127 10:04:01.312921 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a"} err="failed to get container status \"4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a\": rpc error: code = NotFound desc = could not find container \"4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a\": container with ID starting with 4673b695df75a14d188660c7b5d7f91398992bada02fb56a3eb253e6e190d31a not found: ID does not exist"
Jan 27 10:04:02 crc kubenswrapper[4869]: I0127 10:04:02.039318 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d38c693-da40-464a-9822-f98fb1b5ca35" path="/var/lib/kubelet/pods/8d38c693-da40-464a-9822-f98fb1b5ca35/volumes"
Jan 27 10:04:02 crc kubenswrapper[4869]: I0127 10:04:02.060803 4869 generic.go:334] "Generic (PLEG): container finished" podID="124c47df-f98b-41ee-a9da-128eb2a8fe99" containerID="827eabe90fc11d7274bb2e37e9527719e9ada7ebe299bca9fd573a5a0f54ba35" exitCode=0
Jan 27 10:04:02 crc kubenswrapper[4869]: I0127 10:04:02.060894 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" event={"ID":"124c47df-f98b-41ee-a9da-128eb2a8fe99","Type":"ContainerDied","Data":"827eabe90fc11d7274bb2e37e9527719e9ada7ebe299bca9fd573a5a0f54ba35"}
Jan 27 10:04:02 crc kubenswrapper[4869]: I0127 10:04:02.060928 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" event={"ID":"124c47df-f98b-41ee-a9da-128eb2a8fe99","Type":"ContainerStarted","Data":"4fa6c81874a49d3ad93be2a93338e5827fef1ba1aa1a8e55a863b2f4b2f27a28"}
Jan 27 10:04:03 crc kubenswrapper[4869]: I0127 10:04:03.075452 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" event={"ID":"124c47df-f98b-41ee-a9da-128eb2a8fe99","Type":"ContainerStarted","Data":"bcb551f0b58b6dee2d870e3463d2816c3877734e0e2f65164a45c85bb1d39a71"}
Jan 27 10:04:03 crc kubenswrapper[4869]: I0127 10:04:03.076095 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" event={"ID":"124c47df-f98b-41ee-a9da-128eb2a8fe99","Type":"ContainerStarted","Data":"9782982e9cd3cd43823966d8600ad744f17932ce3b864e3fdec662ee47de5fae"}
Jan 27 10:04:03 crc kubenswrapper[4869]: I0127 10:04:03.076119 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" event={"ID":"124c47df-f98b-41ee-a9da-128eb2a8fe99","Type":"ContainerStarted","Data":"3ff17b8d4b00364b10b8eb23c32e9c0edcc6ab17eaacf0aff78e778491a3db1d"}
Jan 27 10:04:03 crc kubenswrapper[4869]: I0127 10:04:03.076137 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" event={"ID":"124c47df-f98b-41ee-a9da-128eb2a8fe99","Type":"ContainerStarted","Data":"d050e527b812cc40250de0787fee38979852768f36636fd50ea48a602663c887"} Jan 27 10:04:03 crc kubenswrapper[4869]: I0127 10:04:03.076156 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" event={"ID":"124c47df-f98b-41ee-a9da-128eb2a8fe99","Type":"ContainerStarted","Data":"b09b461092babe8e8387701728021c2eff76949dcca699517728300121514966"} Jan 27 10:04:03 crc kubenswrapper[4869]: I0127 10:04:03.076172 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" event={"ID":"124c47df-f98b-41ee-a9da-128eb2a8fe99","Type":"ContainerStarted","Data":"68193d36dd70c8913464eba8aaf863ca5e6c7f482ca0c05be1d9889bcecfc177"} Jan 27 10:04:05 crc kubenswrapper[4869]: I0127 10:04:05.089780 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" event={"ID":"124c47df-f98b-41ee-a9da-128eb2a8fe99","Type":"ContainerStarted","Data":"4392f810bfcb01b0b1afb7131b77bb0a55b86e30d74eeac7db038793c9ec7f06"} Jan 27 10:04:08 crc kubenswrapper[4869]: I0127 10:04:08.112062 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" event={"ID":"124c47df-f98b-41ee-a9da-128eb2a8fe99","Type":"ContainerStarted","Data":"5c205e3ff5c5d33fdfa0628fd7b319f68d5ae62ce560001f6ffef710a0c72ab9"} Jan 27 10:04:08 crc kubenswrapper[4869]: I0127 10:04:08.113913 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:08 crc kubenswrapper[4869]: I0127 10:04:08.113956 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:08 crc kubenswrapper[4869]: I0127 10:04:08.114235 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:08 crc kubenswrapper[4869]: I0127 10:04:08.139732 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:08 crc kubenswrapper[4869]: I0127 10:04:08.143540 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:08 crc kubenswrapper[4869]: I0127 10:04:08.154884 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" podStartSLOduration=8.154865872 podStartE2EDuration="8.154865872s" podCreationTimestamp="2026-01-27 10:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 10:04:08.147612822 +0000 UTC m=+616.768036915" watchObservedRunningTime="2026-01-27 10:04:08.154865872 +0000 UTC m=+616.775289955" Jan 27 10:04:11 crc kubenswrapper[4869]: I0127 10:04:11.033738 4869 scope.go:117] "RemoveContainer" containerID="df9de8342d1f640ffd0f53a86a79843cfa53f0e870d9b0f7f8c5fa4f8f2b5342" Jan 27 10:04:11 crc kubenswrapper[4869]: E0127 10:04:11.035339 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-xj5gd_openshift-multus(c4e8dfa0-1849-457a-b564-4f77e534a7e0)\"" pod="openshift-multus/multus-xj5gd" 
podUID="c4e8dfa0-1849-457a-b564-4f77e534a7e0" Jan 27 10:04:23 crc kubenswrapper[4869]: I0127 10:04:23.033577 4869 scope.go:117] "RemoveContainer" containerID="df9de8342d1f640ffd0f53a86a79843cfa53f0e870d9b0f7f8c5fa4f8f2b5342" Jan 27 10:04:24 crc kubenswrapper[4869]: I0127 10:04:24.203202 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xj5gd_c4e8dfa0-1849-457a-b564-4f77e534a7e0/kube-multus/2.log" Jan 27 10:04:24 crc kubenswrapper[4869]: I0127 10:04:24.203466 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xj5gd" event={"ID":"c4e8dfa0-1849-457a-b564-4f77e534a7e0","Type":"ContainerStarted","Data":"c6b8a38bc13e4b56890b1d297f8ec7f471701ad54f228a044856483721580daa"} Jan 27 10:04:31 crc kubenswrapper[4869]: I0127 10:04:31.080444 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bcjf7" Jan 27 10:04:43 crc kubenswrapper[4869]: I0127 10:04:43.715502 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l"] Jan 27 10:04:43 crc kubenswrapper[4869]: I0127 10:04:43.717646 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l" Jan 27 10:04:43 crc kubenswrapper[4869]: I0127 10:04:43.721304 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 10:04:43 crc kubenswrapper[4869]: I0127 10:04:43.724257 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l"] Jan 27 10:04:43 crc kubenswrapper[4869]: I0127 10:04:43.883090 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmmp6\" (UniqueName: \"kubernetes.io/projected/4456d111-1f5f-4ca1-bebd-88fb3faa3033-kube-api-access-cmmp6\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l\" (UID: \"4456d111-1f5f-4ca1-bebd-88fb3faa3033\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l" Jan 27 10:04:43 crc kubenswrapper[4869]: I0127 10:04:43.883176 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4456d111-1f5f-4ca1-bebd-88fb3faa3033-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l\" (UID: \"4456d111-1f5f-4ca1-bebd-88fb3faa3033\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l" Jan 27 10:04:43 crc kubenswrapper[4869]: I0127 10:04:43.883267 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4456d111-1f5f-4ca1-bebd-88fb3faa3033-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l\" (UID: \"4456d111-1f5f-4ca1-bebd-88fb3faa3033\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l" Jan 27 10:04:43 crc kubenswrapper[4869]: I0127 10:04:43.984136 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmmp6\" (UniqueName: \"kubernetes.io/projected/4456d111-1f5f-4ca1-bebd-88fb3faa3033-kube-api-access-cmmp6\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l\" 
(UID: \"4456d111-1f5f-4ca1-bebd-88fb3faa3033\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l" Jan 27 10:04:43 crc kubenswrapper[4869]: I0127 10:04:43.984176 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4456d111-1f5f-4ca1-bebd-88fb3faa3033-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l\" (UID: \"4456d111-1f5f-4ca1-bebd-88fb3faa3033\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l" Jan 27 10:04:43 crc kubenswrapper[4869]: I0127 10:04:43.984196 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4456d111-1f5f-4ca1-bebd-88fb3faa3033-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l\" (UID: \"4456d111-1f5f-4ca1-bebd-88fb3faa3033\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l" Jan 27 10:04:43 crc kubenswrapper[4869]: I0127 10:04:43.984653 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4456d111-1f5f-4ca1-bebd-88fb3faa3033-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l\" (UID: \"4456d111-1f5f-4ca1-bebd-88fb3faa3033\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l" Jan 27 10:04:43 crc kubenswrapper[4869]: I0127 10:04:43.984727 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4456d111-1f5f-4ca1-bebd-88fb3faa3033-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l\" (UID: \"4456d111-1f5f-4ca1-bebd-88fb3faa3033\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l" Jan 27 10:04:44 crc kubenswrapper[4869]: I0127 10:04:44.011594 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmmp6\" (UniqueName: \"kubernetes.io/projected/4456d111-1f5f-4ca1-bebd-88fb3faa3033-kube-api-access-cmmp6\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l\" (UID: \"4456d111-1f5f-4ca1-bebd-88fb3faa3033\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l" Jan 27 10:04:44 crc kubenswrapper[4869]: I0127 10:04:44.035618 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l" Jan 27 10:04:44 crc kubenswrapper[4869]: I0127 10:04:44.241091 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l"] Jan 27 10:04:44 crc kubenswrapper[4869]: I0127 10:04:44.309052 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l" event={"ID":"4456d111-1f5f-4ca1-bebd-88fb3faa3033","Type":"ContainerStarted","Data":"6c47d42757e781657f9eeb6d597cfda460831f8d09988b2861958e24cde5c708"} Jan 27 10:04:45 crc kubenswrapper[4869]: I0127 10:04:45.316194 4869 generic.go:334] "Generic (PLEG): container finished" podID="4456d111-1f5f-4ca1-bebd-88fb3faa3033" containerID="f65a001567894b98cf69d0eb2dbeeeeb371a1a867f9edd7e879efa68ac0bb2d3" exitCode=0 Jan 27 10:04:45 crc kubenswrapper[4869]: I0127 10:04:45.316275 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l" event={"ID":"4456d111-1f5f-4ca1-bebd-88fb3faa3033","Type":"ContainerDied","Data":"f65a001567894b98cf69d0eb2dbeeeeb371a1a867f9edd7e879efa68ac0bb2d3"} Jan 27 10:04:47 crc kubenswrapper[4869]: I0127 10:04:47.327456 4869 generic.go:334] "Generic (PLEG): container finished" podID="4456d111-1f5f-4ca1-bebd-88fb3faa3033" containerID="63220a55c30d797a312d4ec54fd6965ae3e60ca6d7d23de795153da89ccb3134" exitCode=0 Jan 27 10:04:47 crc kubenswrapper[4869]: I0127 10:04:47.327550 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l" event={"ID":"4456d111-1f5f-4ca1-bebd-88fb3faa3033","Type":"ContainerDied","Data":"63220a55c30d797a312d4ec54fd6965ae3e60ca6d7d23de795153da89ccb3134"} Jan 27 10:04:48 crc kubenswrapper[4869]: I0127 10:04:48.337763 4869 generic.go:334] "Generic (PLEG): container finished" podID="4456d111-1f5f-4ca1-bebd-88fb3faa3033" containerID="ad980d5ad60a4d73c5ba856102a53cb2419fff6cce08684d95adf95e2955100c" exitCode=0 Jan 27 10:04:48 crc kubenswrapper[4869]: I0127 10:04:48.337825 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l" event={"ID":"4456d111-1f5f-4ca1-bebd-88fb3faa3033","Type":"ContainerDied","Data":"ad980d5ad60a4d73c5ba856102a53cb2419fff6cce08684d95adf95e2955100c"} Jan 27 10:04:49 crc kubenswrapper[4869]: I0127 10:04:49.604896 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l" Jan 27 10:04:49 crc kubenswrapper[4869]: I0127 10:04:49.760721 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmmp6\" (UniqueName: \"kubernetes.io/projected/4456d111-1f5f-4ca1-bebd-88fb3faa3033-kube-api-access-cmmp6\") pod \"4456d111-1f5f-4ca1-bebd-88fb3faa3033\" (UID: \"4456d111-1f5f-4ca1-bebd-88fb3faa3033\") " Jan 27 10:04:49 crc kubenswrapper[4869]: I0127 10:04:49.760898 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4456d111-1f5f-4ca1-bebd-88fb3faa3033-util\") pod \"4456d111-1f5f-4ca1-bebd-88fb3faa3033\" (UID: \"4456d111-1f5f-4ca1-bebd-88fb3faa3033\") " Jan 27 10:04:49 crc kubenswrapper[4869]: I0127 10:04:49.761039 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4456d111-1f5f-4ca1-bebd-88fb3faa3033-bundle\") pod \"4456d111-1f5f-4ca1-bebd-88fb3faa3033\" (UID: \"4456d111-1f5f-4ca1-bebd-88fb3faa3033\") " Jan 27 10:04:49 crc kubenswrapper[4869]: I0127 10:04:49.762111 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4456d111-1f5f-4ca1-bebd-88fb3faa3033-bundle" (OuterVolumeSpecName: "bundle") pod "4456d111-1f5f-4ca1-bebd-88fb3faa3033" (UID: "4456d111-1f5f-4ca1-bebd-88fb3faa3033"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:04:49 crc kubenswrapper[4869]: I0127 10:04:49.770265 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4456d111-1f5f-4ca1-bebd-88fb3faa3033-kube-api-access-cmmp6" (OuterVolumeSpecName: "kube-api-access-cmmp6") pod "4456d111-1f5f-4ca1-bebd-88fb3faa3033" (UID: "4456d111-1f5f-4ca1-bebd-88fb3faa3033"). InnerVolumeSpecName "kube-api-access-cmmp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:04:49 crc kubenswrapper[4869]: I0127 10:04:49.782249 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4456d111-1f5f-4ca1-bebd-88fb3faa3033-util" (OuterVolumeSpecName: "util") pod "4456d111-1f5f-4ca1-bebd-88fb3faa3033" (UID: "4456d111-1f5f-4ca1-bebd-88fb3faa3033"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:04:49 crc kubenswrapper[4869]: I0127 10:04:49.863062 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4456d111-1f5f-4ca1-bebd-88fb3faa3033-util\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:49 crc kubenswrapper[4869]: I0127 10:04:49.863124 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4456d111-1f5f-4ca1-bebd-88fb3faa3033-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:49 crc kubenswrapper[4869]: I0127 10:04:49.863220 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmmp6\" (UniqueName: \"kubernetes.io/projected/4456d111-1f5f-4ca1-bebd-88fb3faa3033-kube-api-access-cmmp6\") on node \"crc\" DevicePath \"\"" Jan 27 10:04:50 crc kubenswrapper[4869]: I0127 10:04:50.349476 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l" event={"ID":"4456d111-1f5f-4ca1-bebd-88fb3faa3033","Type":"ContainerDied","Data":"6c47d42757e781657f9eeb6d597cfda460831f8d09988b2861958e24cde5c708"} Jan 27 10:04:50 crc kubenswrapper[4869]: I0127 10:04:50.349751 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c47d42757e781657f9eeb6d597cfda460831f8d09988b2861958e24cde5c708" Jan 27 10:04:50 crc kubenswrapper[4869]: I0127 10:04:50.349541 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l" Jan 27 10:04:52 crc kubenswrapper[4869]: I0127 10:04:52.395007 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pv7vd"] Jan 27 10:04:52 crc kubenswrapper[4869]: E0127 10:04:52.395537 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4456d111-1f5f-4ca1-bebd-88fb3faa3033" containerName="extract" Jan 27 10:04:52 crc kubenswrapper[4869]: I0127 10:04:52.395552 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4456d111-1f5f-4ca1-bebd-88fb3faa3033" containerName="extract" Jan 27 10:04:52 crc kubenswrapper[4869]: E0127 10:04:52.395574 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4456d111-1f5f-4ca1-bebd-88fb3faa3033" containerName="pull" Jan 27 10:04:52 crc kubenswrapper[4869]: I0127 10:04:52.395583 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4456d111-1f5f-4ca1-bebd-88fb3faa3033" containerName="pull" Jan 27 10:04:52 crc kubenswrapper[4869]: E0127 10:04:52.395596 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4456d111-1f5f-4ca1-bebd-88fb3faa3033" containerName="util" Jan 27 10:04:52 crc kubenswrapper[4869]: I0127 10:04:52.395604 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4456d111-1f5f-4ca1-bebd-88fb3faa3033" containerName="util" Jan 27 10:04:52 crc kubenswrapper[4869]: I0127 10:04:52.395731 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4456d111-1f5f-4ca1-bebd-88fb3faa3033" containerName="extract" Jan 27 10:04:52 crc kubenswrapper[4869]: I0127 10:04:52.396227 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-pv7vd" Jan 27 10:04:52 crc kubenswrapper[4869]: I0127 10:04:52.399407 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 10:04:52 crc kubenswrapper[4869]: I0127 10:04:52.399559 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-smqmz" Jan 27 10:04:52 crc kubenswrapper[4869]: I0127 10:04:52.399751 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 10:04:52 crc kubenswrapper[4869]: I0127 10:04:52.406394 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pv7vd"] Jan 27 10:04:52 crc kubenswrapper[4869]: I0127 10:04:52.492771 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9q25\" (UniqueName: \"kubernetes.io/projected/427bab1b-d1a1-4106-8d8b-a6e34368576b-kube-api-access-v9q25\") pod \"nmstate-operator-646758c888-pv7vd\" (UID: \"427bab1b-d1a1-4106-8d8b-a6e34368576b\") " pod="openshift-nmstate/nmstate-operator-646758c888-pv7vd" Jan 27 10:04:52 crc kubenswrapper[4869]: I0127 10:04:52.594882 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9q25\" (UniqueName: \"kubernetes.io/projected/427bab1b-d1a1-4106-8d8b-a6e34368576b-kube-api-access-v9q25\") pod \"nmstate-operator-646758c888-pv7vd\" (UID: \"427bab1b-d1a1-4106-8d8b-a6e34368576b\") " pod="openshift-nmstate/nmstate-operator-646758c888-pv7vd" Jan 27 10:04:52 crc kubenswrapper[4869]: I0127 10:04:52.623658 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9q25\" (UniqueName: \"kubernetes.io/projected/427bab1b-d1a1-4106-8d8b-a6e34368576b-kube-api-access-v9q25\") pod \"nmstate-operator-646758c888-pv7vd\" (UID: \"427bab1b-d1a1-4106-8d8b-a6e34368576b\") " pod="openshift-nmstate/nmstate-operator-646758c888-pv7vd" Jan 27 10:04:52 crc kubenswrapper[4869]: I0127 10:04:52.712881 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-pv7vd" Jan 27 10:04:52 crc kubenswrapper[4869]: I0127 10:04:52.980826 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pv7vd"] Jan 27 10:04:53 crc kubenswrapper[4869]: I0127 10:04:53.363488 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-pv7vd" event={"ID":"427bab1b-d1a1-4106-8d8b-a6e34368576b","Type":"ContainerStarted","Data":"e81c2d69a7efbb6304d7dd2b3cb7ed1975105ddf932cf664cfee8f54dcc8abc7"} Jan 27 10:04:56 crc kubenswrapper[4869]: I0127 10:04:56.380371 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-pv7vd" event={"ID":"427bab1b-d1a1-4106-8d8b-a6e34368576b","Type":"ContainerStarted","Data":"08acfa66b32ee73cc9b2ba0ed350881eca17a4a8360a4dbd23b6b772aa0bf16b"} Jan 27 10:04:56 crc kubenswrapper[4869]: I0127 10:04:56.395893 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-pv7vd" podStartSLOduration=2.076795377 podStartE2EDuration="4.39587907s" podCreationTimestamp="2026-01-27 10:04:52 +0000 UTC" firstStartedPulling="2026-01-27 10:04:52.994478368 +0000 UTC m=+661.614902451" lastFinishedPulling="2026-01-27 10:04:55.313562061 +0000 UTC m=+663.933986144" observedRunningTime="2026-01-27 10:04:56.394758585 +0000 UTC m=+665.015182728" watchObservedRunningTime="2026-01-27 10:04:56.39587907 +0000 UTC m=+665.016303153" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.292722 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mnj56"] Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.294020 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-mnj56" Jan 27 10:04:57 crc kubenswrapper[4869]: W0127 10:04:57.295638 4869 reflector.go:561] object-"openshift-nmstate"/"nmstate-handler-dockercfg-9j8ch": failed to list *v1.Secret: secrets "nmstate-handler-dockercfg-9j8ch" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-nmstate": no relationship found between node 'crc' and this object Jan 27 10:04:57 crc kubenswrapper[4869]: E0127 10:04:57.295681 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nmstate-handler-dockercfg-9j8ch\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nmstate-handler-dockercfg-9j8ch\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-nmstate\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.301728 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-7brsd"] Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.302632 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7brsd" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.304449 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.315788 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mnj56"] Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.323251 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-7brsd"] Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.341737 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-gksm8"] Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.342577 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-gksm8" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.365582 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hclz\" (UniqueName: \"kubernetes.io/projected/66fb1b74-3877-435b-85b5-4321b9b074a8-kube-api-access-8hclz\") pod \"nmstate-metrics-54757c584b-mnj56\" (UID: \"66fb1b74-3877-435b-85b5-4321b9b074a8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mnj56" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.427349 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-kndhh"] Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.428033 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kndhh" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.429762 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.430135 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.430768 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-h7s5q" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.439720 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-kndhh"] Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.466773 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4e9e5492-1772-4814-81db-514251142de5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-7brsd\" (UID: \"4e9e5492-1772-4814-81db-514251142de5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7brsd" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.466842 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzsj9\" (UniqueName: \"kubernetes.io/projected/51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862-kube-api-access-hzsj9\") pod \"nmstate-handler-gksm8\" (UID: \"51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862\") " pod="openshift-nmstate/nmstate-handler-gksm8" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.467004 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862-ovs-socket\") pod \"nmstate-handler-gksm8\" (UID: \"51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862\") " pod="openshift-nmstate/nmstate-handler-gksm8" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.467058 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862-dbus-socket\") pod \"nmstate-handler-gksm8\" (UID: \"51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862\") " pod="openshift-nmstate/nmstate-handler-gksm8" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.467086 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862-nmstate-lock\") pod \"nmstate-handler-gksm8\" (UID: \"51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862\") " pod="openshift-nmstate/nmstate-handler-gksm8" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.467132 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hclz\" (UniqueName: \"kubernetes.io/projected/66fb1b74-3877-435b-85b5-4321b9b074a8-kube-api-access-8hclz\") pod \"nmstate-metrics-54757c584b-mnj56\" (UID: \"66fb1b74-3877-435b-85b5-4321b9b074a8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mnj56" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.467292 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfcpw\" (UniqueName: \"kubernetes.io/projected/4e9e5492-1772-4814-81db-514251142de5-kube-api-access-kfcpw\") pod \"nmstate-webhook-8474b5b9d8-7brsd\" (UID: \"4e9e5492-1772-4814-81db-514251142de5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7brsd" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.484717 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hclz\" (UniqueName: \"kubernetes.io/projected/66fb1b74-3877-435b-85b5-4321b9b074a8-kube-api-access-8hclz\") pod \"nmstate-metrics-54757c584b-mnj56\" (UID: \"66fb1b74-3877-435b-85b5-4321b9b074a8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-mnj56" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.573517 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862-dbus-socket\") pod \"nmstate-handler-gksm8\" (UID: \"51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862\") " pod="openshift-nmstate/nmstate-handler-gksm8" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.573574 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862-nmstate-lock\") pod \"nmstate-handler-gksm8\" (UID: \"51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862\") " pod="openshift-nmstate/nmstate-handler-gksm8" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.573635 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f9b6b6c3-2c4f-42f9-93a0-1f3b97f055cd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-kndhh\" (UID: \"f9b6b6c3-2c4f-42f9-93a0-1f3b97f055cd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kndhh" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.573676 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kfcpw\" (UniqueName: \"kubernetes.io/projected/4e9e5492-1772-4814-81db-514251142de5-kube-api-access-kfcpw\") pod \"nmstate-webhook-8474b5b9d8-7brsd\" (UID: \"4e9e5492-1772-4814-81db-514251142de5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7brsd" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.573706 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4e9e5492-1772-4814-81db-514251142de5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-7brsd\" (UID: \"4e9e5492-1772-4814-81db-514251142de5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7brsd" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.573728 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzsj9\" (UniqueName: \"kubernetes.io/projected/51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862-kube-api-access-hzsj9\") pod \"nmstate-handler-gksm8\" (UID: \"51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862\") " pod="openshift-nmstate/nmstate-handler-gksm8" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.573730 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862-nmstate-lock\") pod \"nmstate-handler-gksm8\" (UID: \"51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862\") " pod="openshift-nmstate/nmstate-handler-gksm8" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.573762 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rwz5\" (UniqueName: \"kubernetes.io/projected/f9b6b6c3-2c4f-42f9-93a0-1f3b97f055cd-kube-api-access-8rwz5\") pod \"nmstate-console-plugin-7754f76f8b-kndhh\" (UID: \"f9b6b6c3-2c4f-42f9-93a0-1f3b97f055cd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kndhh" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.573854 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862-dbus-socket\") pod \"nmstate-handler-gksm8\" (UID: \"51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862\") " pod="openshift-nmstate/nmstate-handler-gksm8" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.574005 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862-ovs-socket\") pod \"nmstate-handler-gksm8\" (UID: \"51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862\") " pod="openshift-nmstate/nmstate-handler-gksm8" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.574070 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9b6b6c3-2c4f-42f9-93a0-1f3b97f055cd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-kndhh\" (UID: \"f9b6b6c3-2c4f-42f9-93a0-1f3b97f055cd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kndhh" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.574125 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862-ovs-socket\") pod \"nmstate-handler-gksm8\" (UID: \"51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862\") " pod="openshift-nmstate/nmstate-handler-gksm8" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 
10:04:57.577483 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4e9e5492-1772-4814-81db-514251142de5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-7brsd\" (UID: \"4e9e5492-1772-4814-81db-514251142de5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7brsd" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.590946 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfcpw\" (UniqueName: \"kubernetes.io/projected/4e9e5492-1772-4814-81db-514251142de5-kube-api-access-kfcpw\") pod \"nmstate-webhook-8474b5b9d8-7brsd\" (UID: \"4e9e5492-1772-4814-81db-514251142de5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7brsd" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.593312 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzsj9\" (UniqueName: \"kubernetes.io/projected/51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862-kube-api-access-hzsj9\") pod \"nmstate-handler-gksm8\" (UID: \"51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862\") " pod="openshift-nmstate/nmstate-handler-gksm8" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.625500 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-76fc989f8f-dk5hr"] Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.626192 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.645770 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76fc989f8f-dk5hr"] Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.674875 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rwz5\" (UniqueName: \"kubernetes.io/projected/f9b6b6c3-2c4f-42f9-93a0-1f3b97f055cd-kube-api-access-8rwz5\") pod \"nmstate-console-plugin-7754f76f8b-kndhh\" (UID: \"f9b6b6c3-2c4f-42f9-93a0-1f3b97f055cd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kndhh" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.674953 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f9b6b6c3-2c4f-42f9-93a0-1f3b97f055cd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-kndhh\" (UID: \"f9b6b6c3-2c4f-42f9-93a0-1f3b97f055cd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kndhh" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.675024 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f9b6b6c3-2c4f-42f9-93a0-1f3b97f055cd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-kndhh\" (UID: \"f9b6b6c3-2c4f-42f9-93a0-1f3b97f055cd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kndhh" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.676118 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f9b6b6c3-2c4f-42f9-93a0-1f3b97f055cd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-kndhh\" (UID: \"f9b6b6c3-2c4f-42f9-93a0-1f3b97f055cd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kndhh" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.680459 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f9b6b6c3-2c4f-42f9-93a0-1f3b97f055cd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-kndhh\" (UID: \"f9b6b6c3-2c4f-42f9-93a0-1f3b97f055cd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kndhh" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.713583 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rwz5\" (UniqueName: \"kubernetes.io/projected/f9b6b6c3-2c4f-42f9-93a0-1f3b97f055cd-kube-api-access-8rwz5\") pod \"nmstate-console-plugin-7754f76f8b-kndhh\" (UID: \"f9b6b6c3-2c4f-42f9-93a0-1f3b97f055cd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kndhh" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.748131 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kndhh" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.775816 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8bad357b-3a99-4362-96e5-885c842d99fb-console-serving-cert\") pod \"console-76fc989f8f-dk5hr\" (UID: \"8bad357b-3a99-4362-96e5-885c842d99fb\") " pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.775897 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8bad357b-3a99-4362-96e5-885c842d99fb-console-config\") pod \"console-76fc989f8f-dk5hr\" (UID: \"8bad357b-3a99-4362-96e5-885c842d99fb\") " pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.775933 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bad357b-3a99-4362-96e5-885c842d99fb-trusted-ca-bundle\") pod \"console-76fc989f8f-dk5hr\" (UID: \"8bad357b-3a99-4362-96e5-885c842d99fb\") " pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.775955 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8bad357b-3a99-4362-96e5-885c842d99fb-console-oauth-config\") pod \"console-76fc989f8f-dk5hr\" (UID: \"8bad357b-3a99-4362-96e5-885c842d99fb\") " pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.775985 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxvhm\" (UniqueName: \"kubernetes.io/projected/8bad357b-3a99-4362-96e5-885c842d99fb-kube-api-access-kxvhm\") pod \"console-76fc989f8f-dk5hr\" (UID: \"8bad357b-3a99-4362-96e5-885c842d99fb\") " pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.776013 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8bad357b-3a99-4362-96e5-885c842d99fb-oauth-serving-cert\") pod \"console-76fc989f8f-dk5hr\" (UID: \"8bad357b-3a99-4362-96e5-885c842d99fb\") " pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.776042 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/8bad357b-3a99-4362-96e5-885c842d99fb-service-ca\") pod \"console-76fc989f8f-dk5hr\" (UID: \"8bad357b-3a99-4362-96e5-885c842d99fb\") " pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.876977 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bad357b-3a99-4362-96e5-885c842d99fb-trusted-ca-bundle\") pod \"console-76fc989f8f-dk5hr\" (UID: \"8bad357b-3a99-4362-96e5-885c842d99fb\") " pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.877297 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8bad357b-3a99-4362-96e5-885c842d99fb-console-oauth-config\") pod \"console-76fc989f8f-dk5hr\" (UID: \"8bad357b-3a99-4362-96e5-885c842d99fb\") " pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.877319 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxvhm\" (UniqueName: \"kubernetes.io/projected/8bad357b-3a99-4362-96e5-885c842d99fb-kube-api-access-kxvhm\") pod \"console-76fc989f8f-dk5hr\" (UID: \"8bad357b-3a99-4362-96e5-885c842d99fb\") " pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.877345 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8bad357b-3a99-4362-96e5-885c842d99fb-oauth-serving-cert\") pod \"console-76fc989f8f-dk5hr\" (UID: \"8bad357b-3a99-4362-96e5-885c842d99fb\") " pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.877499 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8bad357b-3a99-4362-96e5-885c842d99fb-service-ca\") pod \"console-76fc989f8f-dk5hr\" (UID: \"8bad357b-3a99-4362-96e5-885c842d99fb\") " pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.877575 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8bad357b-3a99-4362-96e5-885c842d99fb-console-serving-cert\") pod \"console-76fc989f8f-dk5hr\" (UID: \"8bad357b-3a99-4362-96e5-885c842d99fb\") " pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.877604 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8bad357b-3a99-4362-96e5-885c842d99fb-console-config\") pod \"console-76fc989f8f-dk5hr\" (UID: \"8bad357b-3a99-4362-96e5-885c842d99fb\") " pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.879164 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8bad357b-3a99-4362-96e5-885c842d99fb-oauth-serving-cert\") pod \"console-76fc989f8f-dk5hr\" (UID: \"8bad357b-3a99-4362-96e5-885c842d99fb\") " pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.879193 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/8bad357b-3a99-4362-96e5-885c842d99fb-console-config\") pod \"console-76fc989f8f-dk5hr\" (UID: \"8bad357b-3a99-4362-96e5-885c842d99fb\") " pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.880045 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8bad357b-3a99-4362-96e5-885c842d99fb-service-ca\") pod \"console-76fc989f8f-dk5hr\" (UID: \"8bad357b-3a99-4362-96e5-885c842d99fb\") " pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.880608 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bad357b-3a99-4362-96e5-885c842d99fb-trusted-ca-bundle\") pod \"console-76fc989f8f-dk5hr\" (UID: \"8bad357b-3a99-4362-96e5-885c842d99fb\") " pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.883525 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8bad357b-3a99-4362-96e5-885c842d99fb-console-serving-cert\") pod \"console-76fc989f8f-dk5hr\" (UID: \"8bad357b-3a99-4362-96e5-885c842d99fb\") " pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.884172 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8bad357b-3a99-4362-96e5-885c842d99fb-console-oauth-config\") pod \"console-76fc989f8f-dk5hr\" (UID: \"8bad357b-3a99-4362-96e5-885c842d99fb\") " pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.900129 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxvhm\" (UniqueName: \"kubernetes.io/projected/8bad357b-3a99-4362-96e5-885c842d99fb-kube-api-access-kxvhm\") pod \"console-76fc989f8f-dk5hr\" (UID: \"8bad357b-3a99-4362-96e5-885c842d99fb\") " pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.947985 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:04:57 crc kubenswrapper[4869]: I0127 10:04:57.970750 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-kndhh"] Jan 27 10:04:57 crc kubenswrapper[4869]: W0127 10:04:57.976018 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9b6b6c3_2c4f_42f9_93a0_1f3b97f055cd.slice/crio-38709e788332ee9d93746998c0475f199ab1c01dfdcf1266cfe207af70252796 WatchSource:0}: Error finding container 38709e788332ee9d93746998c0475f199ab1c01dfdcf1266cfe207af70252796: Status 404 returned error can't find the container with id 38709e788332ee9d93746998c0475f199ab1c01dfdcf1266cfe207af70252796 Jan 27 10:04:58 crc kubenswrapper[4869]: I0127 10:04:58.138476 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76fc989f8f-dk5hr"] Jan 27 10:04:58 crc kubenswrapper[4869]: I0127 10:04:58.390448 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kndhh" event={"ID":"f9b6b6c3-2c4f-42f9-93a0-1f3b97f055cd","Type":"ContainerStarted","Data":"38709e788332ee9d93746998c0475f199ab1c01dfdcf1266cfe207af70252796"} Jan 27 10:04:58 crc kubenswrapper[4869]: I0127 10:04:58.396255 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76fc989f8f-dk5hr" event={"ID":"8bad357b-3a99-4362-96e5-885c842d99fb","Type":"ContainerStarted","Data":"39ef8cee4b5a28a29c395e95b5e324882e1b4cebccb0593c62c619c7e82fbb89"} Jan 27 10:04:58 crc kubenswrapper[4869]: I0127 10:04:58.396427 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76fc989f8f-dk5hr" event={"ID":"8bad357b-3a99-4362-96e5-885c842d99fb","Type":"ContainerStarted","Data":"d7fd01627b1cc86c31f16f968432707bfea829b21b8eecc0e92ea959473ab27d"} Jan 27 10:04:58 crc kubenswrapper[4869]: I0127 10:04:58.414254 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-76fc989f8f-dk5hr" podStartSLOduration=1.414233804 podStartE2EDuration="1.414233804s" podCreationTimestamp="2026-01-27 10:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 10:04:58.411342425 +0000 UTC m=+667.031766518" watchObservedRunningTime="2026-01-27 10:04:58.414233804 +0000 UTC m=+667.034657897" Jan 27 10:04:58 crc kubenswrapper[4869]: I0127 10:04:58.516139 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-9j8ch" Jan 27 10:04:58 crc kubenswrapper[4869]: I0127 10:04:58.518531 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-gksm8" Jan 27 10:04:58 crc kubenswrapper[4869]: I0127 10:04:58.524085 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7brsd" Jan 27 10:04:58 crc kubenswrapper[4869]: I0127 10:04:58.525861 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-mnj56" Jan 27 10:04:58 crc kubenswrapper[4869]: W0127 10:04:58.556665 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51ef0cba_8dc4_4ec0_bd06_2db6d2cf6862.slice/crio-55554d3694078eee06e2eac3c88687dcbf86fd023871beb25143f0e60d0c518c WatchSource:0}: Error finding container 55554d3694078eee06e2eac3c88687dcbf86fd023871beb25143f0e60d0c518c: Status 404 returned error can't find the container with id 55554d3694078eee06e2eac3c88687dcbf86fd023871beb25143f0e60d0c518c Jan 27 10:04:58 crc kubenswrapper[4869]: I0127 10:04:58.716688 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-7brsd"] Jan 27 10:04:58 crc kubenswrapper[4869]: W0127 10:04:58.723980 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9e5492_1772_4814_81db_514251142de5.slice/crio-67cb97d5c3b1fe4dc94d16b699aca6344b85c7908e0f831d415c108cd8849fc1 WatchSource:0}: Error finding container 67cb97d5c3b1fe4dc94d16b699aca6344b85c7908e0f831d415c108cd8849fc1: Status 404 returned error can't find the container with id 67cb97d5c3b1fe4dc94d16b699aca6344b85c7908e0f831d415c108cd8849fc1 Jan 27 10:04:58 crc kubenswrapper[4869]: I0127 10:04:58.762875 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-mnj56"] Jan 27 10:04:59 crc kubenswrapper[4869]: I0127 10:04:59.403431 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7brsd" event={"ID":"4e9e5492-1772-4814-81db-514251142de5","Type":"ContainerStarted","Data":"67cb97d5c3b1fe4dc94d16b699aca6344b85c7908e0f831d415c108cd8849fc1"} Jan 27 10:04:59 crc kubenswrapper[4869]: I0127 10:04:59.406109 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mnj56" event={"ID":"66fb1b74-3877-435b-85b5-4321b9b074a8","Type":"ContainerStarted","Data":"22e404117254966d10e530ba8d6499eca054e68833f5fb59e1957dd31f197758"} Jan 27 10:04:59 crc kubenswrapper[4869]: I0127 10:04:59.407603 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-gksm8" event={"ID":"51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862","Type":"ContainerStarted","Data":"55554d3694078eee06e2eac3c88687dcbf86fd023871beb25143f0e60d0c518c"} Jan 27 10:05:00 crc kubenswrapper[4869]: I0127 10:05:00.413879 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kndhh" event={"ID":"f9b6b6c3-2c4f-42f9-93a0-1f3b97f055cd","Type":"ContainerStarted","Data":"96108e0c1f75be322c94d904e8584669fd944648feeae01fd31251d4edf21dda"} Jan 27 10:05:00 crc kubenswrapper[4869]: I0127 10:05:00.442288 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-kndhh" podStartSLOduration=1.538479551 podStartE2EDuration="3.442267766s" podCreationTimestamp="2026-01-27 10:04:57 +0000 UTC" firstStartedPulling="2026-01-27 10:04:57.979813128 +0000 UTC m=+666.600237211" lastFinishedPulling="2026-01-27 10:04:59.883601303 +0000 UTC m=+668.504025426" observedRunningTime="2026-01-27 10:05:00.437988374 +0000 UTC m=+669.058412457" watchObservedRunningTime="2026-01-27 10:05:00.442267766 +0000 UTC m=+669.062691849" Jan 27 10:05:01 crc kubenswrapper[4869]: I0127 10:05:01.422078 4869 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-gksm8" event={"ID":"51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862","Type":"ContainerStarted","Data":"168dd21e4e3816a84df8679d2621517b378f5b81f9299c268ec707aa7446d3b8"} Jan 27 10:05:01 crc kubenswrapper[4869]: I0127 10:05:01.422608 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-gksm8" Jan 27 10:05:01 crc kubenswrapper[4869]: I0127 10:05:01.424314 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7brsd" event={"ID":"4e9e5492-1772-4814-81db-514251142de5","Type":"ContainerStarted","Data":"6da3d56fcf8ac517986d1722af88cab125ab044eb912448b384e08f2d9cdbbc9"} Jan 27 10:05:01 crc kubenswrapper[4869]: I0127 10:05:01.424629 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7brsd" Jan 27 10:05:01 crc kubenswrapper[4869]: I0127 10:05:01.426232 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mnj56" event={"ID":"66fb1b74-3877-435b-85b5-4321b9b074a8","Type":"ContainerStarted","Data":"f32c82016eacb12cea48c21636a509a7f88e945cbad666e33927d51b1af88325"} Jan 27 10:05:01 crc kubenswrapper[4869]: I0127 10:05:01.436976 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-gksm8" podStartSLOduration=2.230352795 podStartE2EDuration="4.436960692s" podCreationTimestamp="2026-01-27 10:04:57 +0000 UTC" firstStartedPulling="2026-01-27 10:04:58.560643242 +0000 UTC m=+667.181067325" lastFinishedPulling="2026-01-27 10:05:00.767251149 +0000 UTC m=+669.387675222" observedRunningTime="2026-01-27 10:05:01.434356552 +0000 UTC m=+670.054780655" watchObservedRunningTime="2026-01-27 10:05:01.436960692 +0000 UTC m=+670.057384775" Jan 27 10:05:01 crc kubenswrapper[4869]: I0127 10:05:01.456908 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7brsd" podStartSLOduration=2.392544918 podStartE2EDuration="4.456892165s" podCreationTimestamp="2026-01-27 10:04:57 +0000 UTC" firstStartedPulling="2026-01-27 10:04:58.725391443 +0000 UTC m=+667.345815526" lastFinishedPulling="2026-01-27 10:05:00.78973869 +0000 UTC m=+669.410162773" observedRunningTime="2026-01-27 10:05:01.454917714 +0000 UTC m=+670.075341807" watchObservedRunningTime="2026-01-27 10:05:01.456892165 +0000 UTC m=+670.077316248" Jan 27 10:05:05 crc kubenswrapper[4869]: I0127 10:05:05.454854 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-mnj56" event={"ID":"66fb1b74-3877-435b-85b5-4321b9b074a8","Type":"ContainerStarted","Data":"c7f5db21c846b2a00a55c2974e54a7ddc4687ad56201cee380820416330e18d7"} Jan 27 10:05:05 crc kubenswrapper[4869]: I0127 10:05:05.483456 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-mnj56" podStartSLOduration=2.344856623 podStartE2EDuration="8.483435329s" podCreationTimestamp="2026-01-27 10:04:57 +0000 UTC" firstStartedPulling="2026-01-27 10:04:58.773350516 +0000 UTC m=+667.393774599" lastFinishedPulling="2026-01-27 10:05:04.911929232 +0000 UTC m=+673.532353305" observedRunningTime="2026-01-27 10:05:05.4802215 +0000 UTC m=+674.100645583" watchObservedRunningTime="2026-01-27 10:05:05.483435329 +0000 UTC m=+674.103859422" Jan 27 10:05:07 crc kubenswrapper[4869]: I0127 10:05:07.948555 4869 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:05:07 crc kubenswrapper[4869]: I0127 10:05:07.948918 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:05:07 crc kubenswrapper[4869]: I0127 10:05:07.952277 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:05:08 crc kubenswrapper[4869]: I0127 10:05:08.475154 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-76fc989f8f-dk5hr" Jan 27 10:05:08 crc kubenswrapper[4869]: I0127 10:05:08.547655 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-q86c4"] Jan 27 10:05:08 crc kubenswrapper[4869]: I0127 10:05:08.581746 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-gksm8" Jan 27 10:05:18 crc kubenswrapper[4869]: I0127 10:05:18.533749 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-7brsd" Jan 27 10:05:30 crc kubenswrapper[4869]: I0127 10:05:30.771889 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh"] Jan 27 10:05:30 crc kubenswrapper[4869]: I0127 10:05:30.773557 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh" Jan 27 10:05:30 crc kubenswrapper[4869]: I0127 10:05:30.775301 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 10:05:30 crc kubenswrapper[4869]: I0127 10:05:30.780508 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh"] Jan 27 10:05:30 crc kubenswrapper[4869]: I0127 10:05:30.935248 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45ab252a-dc37-43ef-8c03-5fc40a7d6d89-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh\" (UID: \"45ab252a-dc37-43ef-8c03-5fc40a7d6d89\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh" Jan 27 10:05:30 crc kubenswrapper[4869]: I0127 10:05:30.935309 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45wpn\" (UniqueName: \"kubernetes.io/projected/45ab252a-dc37-43ef-8c03-5fc40a7d6d89-kube-api-access-45wpn\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh\" (UID: \"45ab252a-dc37-43ef-8c03-5fc40a7d6d89\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh" Jan 27 10:05:30 crc kubenswrapper[4869]: I0127 10:05:30.935359 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45ab252a-dc37-43ef-8c03-5fc40a7d6d89-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh\" (UID: \"45ab252a-dc37-43ef-8c03-5fc40a7d6d89\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh" Jan 27 10:05:31 crc kubenswrapper[4869]: I0127 10:05:31.036412 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45wpn\" (UniqueName: \"kubernetes.io/projected/45ab252a-dc37-43ef-8c03-5fc40a7d6d89-kube-api-access-45wpn\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh\" (UID: \"45ab252a-dc37-43ef-8c03-5fc40a7d6d89\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh" Jan 27 10:05:31 crc kubenswrapper[4869]: I0127 10:05:31.036491 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45ab252a-dc37-43ef-8c03-5fc40a7d6d89-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh\" (UID: \"45ab252a-dc37-43ef-8c03-5fc40a7d6d89\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh" Jan 27 10:05:31 crc kubenswrapper[4869]: I0127 10:05:31.036560 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45ab252a-dc37-43ef-8c03-5fc40a7d6d89-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh\" (UID: \"45ab252a-dc37-43ef-8c03-5fc40a7d6d89\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh" Jan 27 10:05:31 crc kubenswrapper[4869]: I0127 10:05:31.037197 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45ab252a-dc37-43ef-8c03-5fc40a7d6d89-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh\" (UID: \"45ab252a-dc37-43ef-8c03-5fc40a7d6d89\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh" Jan 27 10:05:31 crc kubenswrapper[4869]: I0127 10:05:31.037201 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45ab252a-dc37-43ef-8c03-5fc40a7d6d89-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh\" (UID: \"45ab252a-dc37-43ef-8c03-5fc40a7d6d89\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh" Jan 27 10:05:31 crc kubenswrapper[4869]: I0127 10:05:31.059776 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45wpn\" (UniqueName: \"kubernetes.io/projected/45ab252a-dc37-43ef-8c03-5fc40a7d6d89-kube-api-access-45wpn\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh\" (UID: \"45ab252a-dc37-43ef-8c03-5fc40a7d6d89\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh" Jan 27 10:05:31 crc kubenswrapper[4869]: I0127 10:05:31.121259 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh" Jan 27 10:05:31 crc kubenswrapper[4869]: I0127 10:05:31.333964 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh"] Jan 27 10:05:31 crc kubenswrapper[4869]: I0127 10:05:31.616212 4869 generic.go:334] "Generic (PLEG): container finished" podID="45ab252a-dc37-43ef-8c03-5fc40a7d6d89" containerID="e6d4e14d8b69a2dfeb8ded25aac332d76fde7bfb2bfb8beeca901795399a72a8" exitCode=0 Jan 27 10:05:31 crc kubenswrapper[4869]: I0127 10:05:31.616291 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh" event={"ID":"45ab252a-dc37-43ef-8c03-5fc40a7d6d89","Type":"ContainerDied","Data":"e6d4e14d8b69a2dfeb8ded25aac332d76fde7bfb2bfb8beeca901795399a72a8"} Jan 27 10:05:31 crc kubenswrapper[4869]: I0127 10:05:31.616342 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh" event={"ID":"45ab252a-dc37-43ef-8c03-5fc40a7d6d89","Type":"ContainerStarted","Data":"9483cba57c6efce2b082d4d04942ed73903b69602c28053bc9e5924a0f75af91"} Jan 27 10:05:33 crc kubenswrapper[4869]: I0127 10:05:33.629079 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-q86c4" podUID="b6851779-1393-4518-be8b-519296708bd7" containerName="console" containerID="cri-o://0d63e854ffe4c4dfec2ef132ae243942f960bac69c5ca0f8541ecf16d9ea9a48" gracePeriod=15 Jan 27 10:05:33 crc kubenswrapper[4869]: I0127 10:05:33.630482 4869 generic.go:334] "Generic (PLEG): container finished" podID="45ab252a-dc37-43ef-8c03-5fc40a7d6d89" containerID="578417aecc64ef4630b966951da55d9ac65ecf35c1ebffe634f3089a03d1cf85" exitCode=0 Jan 27 10:05:33 crc kubenswrapper[4869]: I0127 10:05:33.630530 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh" event={"ID":"45ab252a-dc37-43ef-8c03-5fc40a7d6d89","Type":"ContainerDied","Data":"578417aecc64ef4630b966951da55d9ac65ecf35c1ebffe634f3089a03d1cf85"} Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.060451 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-q86c4_b6851779-1393-4518-be8b-519296708bd7/console/0.log" Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.060825 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-q86c4" Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.177375 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b6851779-1393-4518-be8b-519296708bd7-console-oauth-config\") pod \"b6851779-1393-4518-be8b-519296708bd7\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.177454 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-trusted-ca-bundle\") pod \"b6851779-1393-4518-be8b-519296708bd7\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.177488 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-console-config\") pod \"b6851779-1393-4518-be8b-519296708bd7\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.177537 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-service-ca\") pod \"b6851779-1393-4518-be8b-519296708bd7\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.177586 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-oauth-serving-cert\") pod \"b6851779-1393-4518-be8b-519296708bd7\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.177613 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknxj\" (UniqueName: \"kubernetes.io/projected/b6851779-1393-4518-be8b-519296708bd7-kube-api-access-tknxj\") pod \"b6851779-1393-4518-be8b-519296708bd7\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.177637 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6851779-1393-4518-be8b-519296708bd7-console-serving-cert\") pod \"b6851779-1393-4518-be8b-519296708bd7\" (UID: \"b6851779-1393-4518-be8b-519296708bd7\") " Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.178669 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-service-ca" (OuterVolumeSpecName: "service-ca") pod "b6851779-1393-4518-be8b-519296708bd7" (UID: "b6851779-1393-4518-be8b-519296708bd7"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.178698 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-console-config" (OuterVolumeSpecName: "console-config") pod "b6851779-1393-4518-be8b-519296708bd7" (UID: "b6851779-1393-4518-be8b-519296708bd7"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.178684 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b6851779-1393-4518-be8b-519296708bd7" (UID: "b6851779-1393-4518-be8b-519296708bd7"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.178949 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b6851779-1393-4518-be8b-519296708bd7" (UID: "b6851779-1393-4518-be8b-519296708bd7"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.184121 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6851779-1393-4518-be8b-519296708bd7-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b6851779-1393-4518-be8b-519296708bd7" (UID: "b6851779-1393-4518-be8b-519296708bd7"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.186527 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6851779-1393-4518-be8b-519296708bd7-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b6851779-1393-4518-be8b-519296708bd7" (UID: "b6851779-1393-4518-be8b-519296708bd7"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.186581 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6851779-1393-4518-be8b-519296708bd7-kube-api-access-tknxj" (OuterVolumeSpecName: "kube-api-access-tknxj") pod "b6851779-1393-4518-be8b-519296708bd7" (UID: "b6851779-1393-4518-be8b-519296708bd7"). InnerVolumeSpecName "kube-api-access-tknxj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.278749 4869 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.278788 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tknxj\" (UniqueName: \"kubernetes.io/projected/b6851779-1393-4518-be8b-519296708bd7-kube-api-access-tknxj\") on node \"crc\" DevicePath \"\"" Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.278802 4869 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6851779-1393-4518-be8b-519296708bd7-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.278811 4869 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b6851779-1393-4518-be8b-519296708bd7-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.278819 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.278827 4869 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.278846 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6851779-1393-4518-be8b-519296708bd7-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.640435 4869 generic.go:334] "Generic (PLEG): container finished" podID="45ab252a-dc37-43ef-8c03-5fc40a7d6d89" containerID="2397d39e4c5102b702768f3567fb9b091819e36b39de0ea76fde5c981c60f002" exitCode=0 Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.640513 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh" event={"ID":"45ab252a-dc37-43ef-8c03-5fc40a7d6d89","Type":"ContainerDied","Data":"2397d39e4c5102b702768f3567fb9b091819e36b39de0ea76fde5c981c60f002"} Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.642934 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-q86c4_b6851779-1393-4518-be8b-519296708bd7/console/0.log" Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.642975 4869 generic.go:334] "Generic (PLEG): container finished" podID="b6851779-1393-4518-be8b-519296708bd7" containerID="0d63e854ffe4c4dfec2ef132ae243942f960bac69c5ca0f8541ecf16d9ea9a48" exitCode=2 Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.643057 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-q86c4" event={"ID":"b6851779-1393-4518-be8b-519296708bd7","Type":"ContainerDied","Data":"0d63e854ffe4c4dfec2ef132ae243942f960bac69c5ca0f8541ecf16d9ea9a48"} Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.643089 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-q86c4" 
event={"ID":"b6851779-1393-4518-be8b-519296708bd7","Type":"ContainerDied","Data":"5f23d6383ba9588cc57faf791153faa0f85b811c41a3606c51411120054c2450"} Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.643112 4869 scope.go:117] "RemoveContainer" containerID="0d63e854ffe4c4dfec2ef132ae243942f960bac69c5ca0f8541ecf16d9ea9a48" Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.643246 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-q86c4" Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.673643 4869 scope.go:117] "RemoveContainer" containerID="0d63e854ffe4c4dfec2ef132ae243942f960bac69c5ca0f8541ecf16d9ea9a48" Jan 27 10:05:34 crc kubenswrapper[4869]: E0127 10:05:34.674475 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d63e854ffe4c4dfec2ef132ae243942f960bac69c5ca0f8541ecf16d9ea9a48\": container with ID starting with 0d63e854ffe4c4dfec2ef132ae243942f960bac69c5ca0f8541ecf16d9ea9a48 not found: ID does not exist" containerID="0d63e854ffe4c4dfec2ef132ae243942f960bac69c5ca0f8541ecf16d9ea9a48" Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.674528 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d63e854ffe4c4dfec2ef132ae243942f960bac69c5ca0f8541ecf16d9ea9a48"} err="failed to get container status \"0d63e854ffe4c4dfec2ef132ae243942f960bac69c5ca0f8541ecf16d9ea9a48\": rpc error: code = NotFound desc = could not find container \"0d63e854ffe4c4dfec2ef132ae243942f960bac69c5ca0f8541ecf16d9ea9a48\": container with ID starting with 0d63e854ffe4c4dfec2ef132ae243942f960bac69c5ca0f8541ecf16d9ea9a48 not found: ID does not exist" Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.686266 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-q86c4"] Jan 27 10:05:34 crc kubenswrapper[4869]: I0127 10:05:34.693196 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-q86c4"] Jan 27 10:05:35 crc kubenswrapper[4869]: I0127 10:05:35.881114 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh" Jan 27 10:05:36 crc kubenswrapper[4869]: I0127 10:05:36.001416 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45wpn\" (UniqueName: \"kubernetes.io/projected/45ab252a-dc37-43ef-8c03-5fc40a7d6d89-kube-api-access-45wpn\") pod \"45ab252a-dc37-43ef-8c03-5fc40a7d6d89\" (UID: \"45ab252a-dc37-43ef-8c03-5fc40a7d6d89\") " Jan 27 10:05:36 crc kubenswrapper[4869]: I0127 10:05:36.002397 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45ab252a-dc37-43ef-8c03-5fc40a7d6d89-bundle\") pod \"45ab252a-dc37-43ef-8c03-5fc40a7d6d89\" (UID: \"45ab252a-dc37-43ef-8c03-5fc40a7d6d89\") " Jan 27 10:05:36 crc kubenswrapper[4869]: I0127 10:05:36.002649 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45ab252a-dc37-43ef-8c03-5fc40a7d6d89-util\") pod \"45ab252a-dc37-43ef-8c03-5fc40a7d6d89\" (UID: \"45ab252a-dc37-43ef-8c03-5fc40a7d6d89\") " Jan 27 10:05:36 crc kubenswrapper[4869]: I0127 10:05:36.003460 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45ab252a-dc37-43ef-8c03-5fc40a7d6d89-bundle" (OuterVolumeSpecName: "bundle") pod "45ab252a-dc37-43ef-8c03-5fc40a7d6d89" (UID: "45ab252a-dc37-43ef-8c03-5fc40a7d6d89"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:05:36 crc kubenswrapper[4869]: I0127 10:05:36.010649 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ab252a-dc37-43ef-8c03-5fc40a7d6d89-kube-api-access-45wpn" (OuterVolumeSpecName: "kube-api-access-45wpn") pod "45ab252a-dc37-43ef-8c03-5fc40a7d6d89" (UID: "45ab252a-dc37-43ef-8c03-5fc40a7d6d89"). InnerVolumeSpecName "kube-api-access-45wpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:05:36 crc kubenswrapper[4869]: I0127 10:05:36.015348 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45ab252a-dc37-43ef-8c03-5fc40a7d6d89-util" (OuterVolumeSpecName: "util") pod "45ab252a-dc37-43ef-8c03-5fc40a7d6d89" (UID: "45ab252a-dc37-43ef-8c03-5fc40a7d6d89"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:05:36 crc kubenswrapper[4869]: I0127 10:05:36.043149 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6851779-1393-4518-be8b-519296708bd7" path="/var/lib/kubelet/pods/b6851779-1393-4518-be8b-519296708bd7/volumes" Jan 27 10:05:36 crc kubenswrapper[4869]: I0127 10:05:36.104281 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45ab252a-dc37-43ef-8c03-5fc40a7d6d89-util\") on node \"crc\" DevicePath \"\"" Jan 27 10:05:36 crc kubenswrapper[4869]: I0127 10:05:36.104338 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45wpn\" (UniqueName: \"kubernetes.io/projected/45ab252a-dc37-43ef-8c03-5fc40a7d6d89-kube-api-access-45wpn\") on node \"crc\" DevicePath \"\"" Jan 27 10:05:36 crc kubenswrapper[4869]: I0127 10:05:36.104354 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45ab252a-dc37-43ef-8c03-5fc40a7d6d89-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 10:05:36 crc kubenswrapper[4869]: I0127 10:05:36.665195 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh" event={"ID":"45ab252a-dc37-43ef-8c03-5fc40a7d6d89","Type":"ContainerDied","Data":"9483cba57c6efce2b082d4d04942ed73903b69602c28053bc9e5924a0f75af91"} Jan 27 10:05:36 crc kubenswrapper[4869]: I0127 10:05:36.665235 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9483cba57c6efce2b082d4d04942ed73903b69602c28053bc9e5924a0f75af91" Jan 27 10:05:36 crc kubenswrapper[4869]: I0127 10:05:36.665348 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.421021 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-66ccd9d9b6-msfrv"] Jan 27 10:05:45 crc kubenswrapper[4869]: E0127 10:05:45.421701 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ab252a-dc37-43ef-8c03-5fc40a7d6d89" containerName="util" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.421713 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ab252a-dc37-43ef-8c03-5fc40a7d6d89" containerName="util" Jan 27 10:05:45 crc kubenswrapper[4869]: E0127 10:05:45.421728 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ab252a-dc37-43ef-8c03-5fc40a7d6d89" containerName="extract" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.421733 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ab252a-dc37-43ef-8c03-5fc40a7d6d89" containerName="extract" Jan 27 10:05:45 crc kubenswrapper[4869]: E0127 10:05:45.421743 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ab252a-dc37-43ef-8c03-5fc40a7d6d89" containerName="pull" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.421750 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ab252a-dc37-43ef-8c03-5fc40a7d6d89" containerName="pull" Jan 27 10:05:45 crc kubenswrapper[4869]: E0127 10:05:45.421764 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6851779-1393-4518-be8b-519296708bd7" containerName="console" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.421769 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6851779-1393-4518-be8b-519296708bd7" containerName="console" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.421875 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="45ab252a-dc37-43ef-8c03-5fc40a7d6d89" containerName="extract" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.421884 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6851779-1393-4518-be8b-519296708bd7" containerName="console" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.422282 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-66ccd9d9b6-msfrv" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.424146 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.424336 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.424657 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.425107 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.425313 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-57dn2" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.439919 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-66ccd9d9b6-msfrv"] Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.521264 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f8c9dcc8-f88f-4243-8be7-81ce1b582448-webhook-cert\") pod \"metallb-operator-controller-manager-66ccd9d9b6-msfrv\" (UID: \"f8c9dcc8-f88f-4243-8be7-81ce1b582448\") " pod="metallb-system/metallb-operator-controller-manager-66ccd9d9b6-msfrv" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.521365 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f8c9dcc8-f88f-4243-8be7-81ce1b582448-apiservice-cert\") pod \"metallb-operator-controller-manager-66ccd9d9b6-msfrv\" (UID: \"f8c9dcc8-f88f-4243-8be7-81ce1b582448\") " pod="metallb-system/metallb-operator-controller-manager-66ccd9d9b6-msfrv" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.521394 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrfv8\" (UniqueName: \"kubernetes.io/projected/f8c9dcc8-f88f-4243-8be7-81ce1b582448-kube-api-access-hrfv8\") pod \"metallb-operator-controller-manager-66ccd9d9b6-msfrv\" (UID: \"f8c9dcc8-f88f-4243-8be7-81ce1b582448\") " pod="metallb-system/metallb-operator-controller-manager-66ccd9d9b6-msfrv" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.623060 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f8c9dcc8-f88f-4243-8be7-81ce1b582448-apiservice-cert\") pod \"metallb-operator-controller-manager-66ccd9d9b6-msfrv\" (UID: \"f8c9dcc8-f88f-4243-8be7-81ce1b582448\") " pod="metallb-system/metallb-operator-controller-manager-66ccd9d9b6-msfrv" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.623119 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrfv8\" (UniqueName: \"kubernetes.io/projected/f8c9dcc8-f88f-4243-8be7-81ce1b582448-kube-api-access-hrfv8\") pod \"metallb-operator-controller-manager-66ccd9d9b6-msfrv\" (UID: \"f8c9dcc8-f88f-4243-8be7-81ce1b582448\") " pod="metallb-system/metallb-operator-controller-manager-66ccd9d9b6-msfrv" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.623176 
4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f8c9dcc8-f88f-4243-8be7-81ce1b582448-webhook-cert\") pod \"metallb-operator-controller-manager-66ccd9d9b6-msfrv\" (UID: \"f8c9dcc8-f88f-4243-8be7-81ce1b582448\") " pod="metallb-system/metallb-operator-controller-manager-66ccd9d9b6-msfrv" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.628593 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f8c9dcc8-f88f-4243-8be7-81ce1b582448-webhook-cert\") pod \"metallb-operator-controller-manager-66ccd9d9b6-msfrv\" (UID: \"f8c9dcc8-f88f-4243-8be7-81ce1b582448\") " pod="metallb-system/metallb-operator-controller-manager-66ccd9d9b6-msfrv" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.628611 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f8c9dcc8-f88f-4243-8be7-81ce1b582448-apiservice-cert\") pod \"metallb-operator-controller-manager-66ccd9d9b6-msfrv\" (UID: \"f8c9dcc8-f88f-4243-8be7-81ce1b582448\") " pod="metallb-system/metallb-operator-controller-manager-66ccd9d9b6-msfrv" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.638429 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrfv8\" (UniqueName: \"kubernetes.io/projected/f8c9dcc8-f88f-4243-8be7-81ce1b582448-kube-api-access-hrfv8\") pod \"metallb-operator-controller-manager-66ccd9d9b6-msfrv\" (UID: \"f8c9dcc8-f88f-4243-8be7-81ce1b582448\") " pod="metallb-system/metallb-operator-controller-manager-66ccd9d9b6-msfrv" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.698134 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.698221 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.737385 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-66ccd9d9b6-msfrv" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.862071 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-65584b46bc-jsdnn"] Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.863264 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-65584b46bc-jsdnn" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.868965 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-q8q9r" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.868978 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.869195 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.871851 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-65584b46bc-jsdnn"] Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.927534 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a4d3a261-e179-4022-90d9-bacdc6673d2e-apiservice-cert\") pod \"metallb-operator-webhook-server-65584b46bc-jsdnn\" (UID: \"a4d3a261-e179-4022-90d9-bacdc6673d2e\") " pod="metallb-system/metallb-operator-webhook-server-65584b46bc-jsdnn" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.927659 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl5mp\" (UniqueName: \"kubernetes.io/projected/a4d3a261-e179-4022-90d9-bacdc6673d2e-kube-api-access-bl5mp\") pod \"metallb-operator-webhook-server-65584b46bc-jsdnn\" (UID: \"a4d3a261-e179-4022-90d9-bacdc6673d2e\") " pod="metallb-system/metallb-operator-webhook-server-65584b46bc-jsdnn" Jan 27 10:05:45 crc kubenswrapper[4869]: I0127 10:05:45.927687 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a4d3a261-e179-4022-90d9-bacdc6673d2e-webhook-cert\") pod \"metallb-operator-webhook-server-65584b46bc-jsdnn\" (UID: \"a4d3a261-e179-4022-90d9-bacdc6673d2e\") " pod="metallb-system/metallb-operator-webhook-server-65584b46bc-jsdnn" Jan 27 10:05:46 crc kubenswrapper[4869]: I0127 10:05:46.015756 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-66ccd9d9b6-msfrv"] Jan 27 10:05:46 crc kubenswrapper[4869]: I0127 10:05:46.028357 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl5mp\" (UniqueName: \"kubernetes.io/projected/a4d3a261-e179-4022-90d9-bacdc6673d2e-kube-api-access-bl5mp\") pod \"metallb-operator-webhook-server-65584b46bc-jsdnn\" (UID: \"a4d3a261-e179-4022-90d9-bacdc6673d2e\") " pod="metallb-system/metallb-operator-webhook-server-65584b46bc-jsdnn" Jan 27 10:05:46 crc kubenswrapper[4869]: I0127 10:05:46.028389 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a4d3a261-e179-4022-90d9-bacdc6673d2e-webhook-cert\") pod \"metallb-operator-webhook-server-65584b46bc-jsdnn\" (UID: \"a4d3a261-e179-4022-90d9-bacdc6673d2e\") " pod="metallb-system/metallb-operator-webhook-server-65584b46bc-jsdnn" Jan 27 10:05:46 crc kubenswrapper[4869]: I0127 10:05:46.028410 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a4d3a261-e179-4022-90d9-bacdc6673d2e-apiservice-cert\") pod 
\"metallb-operator-webhook-server-65584b46bc-jsdnn\" (UID: \"a4d3a261-e179-4022-90d9-bacdc6673d2e\") " pod="metallb-system/metallb-operator-webhook-server-65584b46bc-jsdnn" Jan 27 10:05:46 crc kubenswrapper[4869]: I0127 10:05:46.033943 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a4d3a261-e179-4022-90d9-bacdc6673d2e-webhook-cert\") pod \"metallb-operator-webhook-server-65584b46bc-jsdnn\" (UID: \"a4d3a261-e179-4022-90d9-bacdc6673d2e\") " pod="metallb-system/metallb-operator-webhook-server-65584b46bc-jsdnn" Jan 27 10:05:46 crc kubenswrapper[4869]: I0127 10:05:46.037134 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a4d3a261-e179-4022-90d9-bacdc6673d2e-apiservice-cert\") pod \"metallb-operator-webhook-server-65584b46bc-jsdnn\" (UID: \"a4d3a261-e179-4022-90d9-bacdc6673d2e\") " pod="metallb-system/metallb-operator-webhook-server-65584b46bc-jsdnn" Jan 27 10:05:46 crc kubenswrapper[4869]: I0127 10:05:46.044212 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl5mp\" (UniqueName: \"kubernetes.io/projected/a4d3a261-e179-4022-90d9-bacdc6673d2e-kube-api-access-bl5mp\") pod \"metallb-operator-webhook-server-65584b46bc-jsdnn\" (UID: \"a4d3a261-e179-4022-90d9-bacdc6673d2e\") " pod="metallb-system/metallb-operator-webhook-server-65584b46bc-jsdnn" Jan 27 10:05:46 crc kubenswrapper[4869]: I0127 10:05:46.184488 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-65584b46bc-jsdnn" Jan 27 10:05:46 crc kubenswrapper[4869]: I0127 10:05:46.606775 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-65584b46bc-jsdnn"] Jan 27 10:05:46 crc kubenswrapper[4869]: W0127 10:05:46.612031 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4d3a261_e179_4022_90d9_bacdc6673d2e.slice/crio-a26099afc2911fce694a98f61e8fc3263fd67e0704a06eaab8c66b052d7d467c WatchSource:0}: Error finding container a26099afc2911fce694a98f61e8fc3263fd67e0704a06eaab8c66b052d7d467c: Status 404 returned error can't find the container with id a26099afc2911fce694a98f61e8fc3263fd67e0704a06eaab8c66b052d7d467c Jan 27 10:05:46 crc kubenswrapper[4869]: I0127 10:05:46.715860 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-66ccd9d9b6-msfrv" event={"ID":"f8c9dcc8-f88f-4243-8be7-81ce1b582448","Type":"ContainerStarted","Data":"92c37c95f86248610f7369df7164c3f0467483d91a5a99f4fa738b174ccb3c24"} Jan 27 10:05:46 crc kubenswrapper[4869]: I0127 10:05:46.717097 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-65584b46bc-jsdnn" event={"ID":"a4d3a261-e179-4022-90d9-bacdc6673d2e","Type":"ContainerStarted","Data":"a26099afc2911fce694a98f61e8fc3263fd67e0704a06eaab8c66b052d7d467c"} Jan 27 10:05:49 crc kubenswrapper[4869]: I0127 10:05:49.736888 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-66ccd9d9b6-msfrv" event={"ID":"f8c9dcc8-f88f-4243-8be7-81ce1b582448","Type":"ContainerStarted","Data":"b13d025b920cdd6ebcce88edbc49e774e1e20982309593c7f2862d618857cee0"} Jan 27 10:05:49 crc kubenswrapper[4869]: I0127 10:05:49.737509 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="metallb-system/metallb-operator-controller-manager-66ccd9d9b6-msfrv" Jan 27 10:05:49 crc kubenswrapper[4869]: I0127 10:05:49.769289 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-66ccd9d9b6-msfrv" podStartSLOduration=1.932907157 podStartE2EDuration="4.769267778s" podCreationTimestamp="2026-01-27 10:05:45 +0000 UTC" firstStartedPulling="2026-01-27 10:05:46.026125711 +0000 UTC m=+714.646549794" lastFinishedPulling="2026-01-27 10:05:48.862486332 +0000 UTC m=+717.482910415" observedRunningTime="2026-01-27 10:05:49.761029225 +0000 UTC m=+718.381453318" watchObservedRunningTime="2026-01-27 10:05:49.769267778 +0000 UTC m=+718.389691861" Jan 27 10:05:51 crc kubenswrapper[4869]: I0127 10:05:51.750626 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-65584b46bc-jsdnn" event={"ID":"a4d3a261-e179-4022-90d9-bacdc6673d2e","Type":"ContainerStarted","Data":"399267d1c4881084185cb63e59aec3a14c2251d02fec587a3e5691dab6fb05c6"} Jan 27 10:05:51 crc kubenswrapper[4869]: I0127 10:05:51.751452 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-65584b46bc-jsdnn" Jan 27 10:05:51 crc kubenswrapper[4869]: I0127 10:05:51.777321 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-65584b46bc-jsdnn" podStartSLOduration=2.366551625 podStartE2EDuration="6.777296645s" podCreationTimestamp="2026-01-27 10:05:45 +0000 UTC" firstStartedPulling="2026-01-27 10:05:46.616072635 +0000 UTC m=+715.236496728" lastFinishedPulling="2026-01-27 10:05:51.026817655 +0000 UTC m=+719.647241748" observedRunningTime="2026-01-27 10:05:51.775090877 +0000 UTC m=+720.395514990" watchObservedRunningTime="2026-01-27 10:05:51.777296645 +0000 UTC m=+720.397720738" Jan 27 10:06:06 crc kubenswrapper[4869]: I0127 10:06:06.188682 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-65584b46bc-jsdnn" Jan 27 10:06:15 crc kubenswrapper[4869]: I0127 10:06:15.697907 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:06:15 crc kubenswrapper[4869]: I0127 10:06:15.698567 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:06:25 crc kubenswrapper[4869]: I0127 10:06:25.740244 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-66ccd9d9b6-msfrv" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.513708 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-b4fnc"] Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.516626 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.518122 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.518561 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-mndrn" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.518872 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.537506 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-vtdm4"] Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.538720 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vtdm4" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.540413 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.546354 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-vtdm4"] Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.599638 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/96c9106e-9af3-468a-8a06-4fbc013ab6d1-metrics\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.599678 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/96c9106e-9af3-468a-8a06-4fbc013ab6d1-frr-startup\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.599712 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnfq4\" (UniqueName: \"kubernetes.io/projected/96c9106e-9af3-468a-8a06-4fbc013ab6d1-kube-api-access-wnfq4\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.599733 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/96c9106e-9af3-468a-8a06-4fbc013ab6d1-frr-sockets\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.599767 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96c9106e-9af3-468a-8a06-4fbc013ab6d1-metrics-certs\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.599795 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/96c9106e-9af3-468a-8a06-4fbc013ab6d1-frr-conf\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " 
pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.599900 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/96c9106e-9af3-468a-8a06-4fbc013ab6d1-reloader\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.599954 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2m5w\" (UniqueName: \"kubernetes.io/projected/4558cbce-4dbb-4621-a880-674cc8ea8353-kube-api-access-j2m5w\") pod \"frr-k8s-webhook-server-7df86c4f6c-vtdm4\" (UID: \"4558cbce-4dbb-4621-a880-674cc8ea8353\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vtdm4" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.600008 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4558cbce-4dbb-4621-a880-674cc8ea8353-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-vtdm4\" (UID: \"4558cbce-4dbb-4621-a880-674cc8ea8353\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vtdm4" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.603370 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-kbqgv"] Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.604285 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-kbqgv" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.605915 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.606112 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-trrcw" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.606542 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.606719 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.614556 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-4lv4g"] Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.615448 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-4lv4g" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.616920 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.643979 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-4lv4g"] Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.701858 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/96c9106e-9af3-468a-8a06-4fbc013ab6d1-frr-sockets\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.701928 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96c9106e-9af3-468a-8a06-4fbc013ab6d1-metrics-certs\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.701959 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a494a726-4ad9-4a6a-a91a-bd3a8865d1af-cert\") pod \"controller-6968d8fdc4-4lv4g\" (UID: \"a494a726-4ad9-4a6a-a91a-bd3a8865d1af\") " pod="metallb-system/controller-6968d8fdc4-4lv4g" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.701995 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkzml\" (UniqueName: \"kubernetes.io/projected/a494a726-4ad9-4a6a-a91a-bd3a8865d1af-kube-api-access-fkzml\") pod \"controller-6968d8fdc4-4lv4g\" (UID: \"a494a726-4ad9-4a6a-a91a-bd3a8865d1af\") " pod="metallb-system/controller-6968d8fdc4-4lv4g" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.702025 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a494a726-4ad9-4a6a-a91a-bd3a8865d1af-metrics-certs\") pod \"controller-6968d8fdc4-4lv4g\" (UID: \"a494a726-4ad9-4a6a-a91a-bd3a8865d1af\") " pod="metallb-system/controller-6968d8fdc4-4lv4g" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.702049 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/96c9106e-9af3-468a-8a06-4fbc013ab6d1-frr-conf\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.702077 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/96c9106e-9af3-468a-8a06-4fbc013ab6d1-reloader\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.702101 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2m5w\" (UniqueName: \"kubernetes.io/projected/4558cbce-4dbb-4621-a880-674cc8ea8353-kube-api-access-j2m5w\") pod \"frr-k8s-webhook-server-7df86c4f6c-vtdm4\" (UID: \"4558cbce-4dbb-4621-a880-674cc8ea8353\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vtdm4" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.702135 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4558cbce-4dbb-4621-a880-674cc8ea8353-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-vtdm4\" (UID: \"4558cbce-4dbb-4621-a880-674cc8ea8353\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vtdm4" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.702164 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/96c9106e-9af3-468a-8a06-4fbc013ab6d1-metrics\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.702184 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/96c9106e-9af3-468a-8a06-4fbc013ab6d1-frr-startup\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.702217 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnfq4\" (UniqueName: \"kubernetes.io/projected/96c9106e-9af3-468a-8a06-4fbc013ab6d1-kube-api-access-wnfq4\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.703117 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/96c9106e-9af3-468a-8a06-4fbc013ab6d1-frr-sockets\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:26 crc kubenswrapper[4869]: E0127 10:06:26.703202 4869 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 27 10:06:26 crc kubenswrapper[4869]: E0127 10:06:26.703250 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96c9106e-9af3-468a-8a06-4fbc013ab6d1-metrics-certs podName:96c9106e-9af3-468a-8a06-4fbc013ab6d1 nodeName:}" failed. No retries permitted until 2026-01-27 10:06:27.203235283 +0000 UTC m=+755.823659366 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/96c9106e-9af3-468a-8a06-4fbc013ab6d1-metrics-certs") pod "frr-k8s-b4fnc" (UID: "96c9106e-9af3-468a-8a06-4fbc013ab6d1") : secret "frr-k8s-certs-secret" not found Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.703630 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/96c9106e-9af3-468a-8a06-4fbc013ab6d1-frr-conf\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.703862 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/96c9106e-9af3-468a-8a06-4fbc013ab6d1-reloader\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:26 crc kubenswrapper[4869]: E0127 10:06:26.704075 4869 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 27 10:06:26 crc kubenswrapper[4869]: E0127 10:06:26.704112 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4558cbce-4dbb-4621-a880-674cc8ea8353-cert podName:4558cbce-4dbb-4621-a880-674cc8ea8353 nodeName:}" failed. No retries permitted until 2026-01-27 10:06:27.20410236 +0000 UTC m=+755.824526443 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4558cbce-4dbb-4621-a880-674cc8ea8353-cert") pod "frr-k8s-webhook-server-7df86c4f6c-vtdm4" (UID: "4558cbce-4dbb-4621-a880-674cc8ea8353") : secret "frr-k8s-webhook-server-cert" not found Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.704322 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/96c9106e-9af3-468a-8a06-4fbc013ab6d1-metrics\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.705227 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/96c9106e-9af3-468a-8a06-4fbc013ab6d1-frr-startup\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.719965 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnfq4\" (UniqueName: \"kubernetes.io/projected/96c9106e-9af3-468a-8a06-4fbc013ab6d1-kube-api-access-wnfq4\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.721578 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2m5w\" (UniqueName: \"kubernetes.io/projected/4558cbce-4dbb-4621-a880-674cc8ea8353-kube-api-access-j2m5w\") pod \"frr-k8s-webhook-server-7df86c4f6c-vtdm4\" (UID: \"4558cbce-4dbb-4621-a880-674cc8ea8353\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vtdm4" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.803034 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkzml\" (UniqueName: \"kubernetes.io/projected/a494a726-4ad9-4a6a-a91a-bd3a8865d1af-kube-api-access-fkzml\") pod 
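
[annotation] The two "No retries permitted until ... (durationBeforeRetry 500ms)" entries above show kubelet's per-operation retry backoff: when a mount fails (here because the metallb cert secrets have not been published yet), the operation is locked out for an exponentially increasing delay. The 500ms initial delay is straight from the log; the doubling factor and cap in this sketch are assumptions for illustration, not a claim about kubelet's exact constants:

```go
package main

import (
	"fmt"
	"time"
)

// backoff tracks retry state for one volume operation; a sketch of the idea
// behind kubelet's nestedpendingoperations, not its real implementation.
type backoff struct {
	delay    time.Duration
	notUntil time.Time
}

const (
	initialDelay = 500 * time.Millisecond // matches "durationBeforeRetry 500ms"
	maxDelay     = 2 * time.Minute        // assumed cap, for illustration only
)

func (b *backoff) fail(now time.Time) {
	switch {
	case b.delay == 0:
		b.delay = initialDelay
	default:
		b.delay *= 2
		if b.delay > maxDelay {
			b.delay = maxDelay
		}
	}
	b.notUntil = now.Add(b.delay)
}

func (b *backoff) ready(now time.Time) bool { return !now.Before(b.notUntil) }

func main() {
	var b backoff
	now := time.Now()
	for i := 1; i <= 4; i++ {
		b.fail(now)
		fmt.Printf("attempt %d failed; no retries permitted until %s (durationBeforeRetry %s)\n",
			i, b.notUntil.Format(time.RFC3339Nano), b.delay)
		now = b.notUntil // pretend the next attempt happens at the deadline
	}
}
```

In this trace the backoff never escalates past the first step: the secrets appear within the 500ms window and the retried mounts at 10:06:27 succeed.
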
\"controller-6968d8fdc4-4lv4g\" (UID: \"a494a726-4ad9-4a6a-a91a-bd3a8865d1af\") " pod="metallb-system/controller-6968d8fdc4-4lv4g" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.803072 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a494a726-4ad9-4a6a-a91a-bd3a8865d1af-cert\") pod \"controller-6968d8fdc4-4lv4g\" (UID: \"a494a726-4ad9-4a6a-a91a-bd3a8865d1af\") " pod="metallb-system/controller-6968d8fdc4-4lv4g" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.803098 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a494a726-4ad9-4a6a-a91a-bd3a8865d1af-metrics-certs\") pod \"controller-6968d8fdc4-4lv4g\" (UID: \"a494a726-4ad9-4a6a-a91a-bd3a8865d1af\") " pod="metallb-system/controller-6968d8fdc4-4lv4g" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.803128 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/49d5f9df-c528-4fa0-bc0c-fba73c19add9-metallb-excludel2\") pod \"speaker-kbqgv\" (UID: \"49d5f9df-c528-4fa0-bc0c-fba73c19add9\") " pod="metallb-system/speaker-kbqgv" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.803195 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/49d5f9df-c528-4fa0-bc0c-fba73c19add9-memberlist\") pod \"speaker-kbqgv\" (UID: \"49d5f9df-c528-4fa0-bc0c-fba73c19add9\") " pod="metallb-system/speaker-kbqgv" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.803211 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49d5f9df-c528-4fa0-bc0c-fba73c19add9-metrics-certs\") pod \"speaker-kbqgv\" (UID: \"49d5f9df-c528-4fa0-bc0c-fba73c19add9\") " pod="metallb-system/speaker-kbqgv" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.803234 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z6pr\" (UniqueName: \"kubernetes.io/projected/49d5f9df-c528-4fa0-bc0c-fba73c19add9-kube-api-access-8z6pr\") pod \"speaker-kbqgv\" (UID: \"49d5f9df-c528-4fa0-bc0c-fba73c19add9\") " pod="metallb-system/speaker-kbqgv" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.804944 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.807250 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a494a726-4ad9-4a6a-a91a-bd3a8865d1af-metrics-certs\") pod \"controller-6968d8fdc4-4lv4g\" (UID: \"a494a726-4ad9-4a6a-a91a-bd3a8865d1af\") " pod="metallb-system/controller-6968d8fdc4-4lv4g" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.817658 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a494a726-4ad9-4a6a-a91a-bd3a8865d1af-cert\") pod \"controller-6968d8fdc4-4lv4g\" (UID: \"a494a726-4ad9-4a6a-a91a-bd3a8865d1af\") " pod="metallb-system/controller-6968d8fdc4-4lv4g" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.823923 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkzml\" (UniqueName: 
\"kubernetes.io/projected/a494a726-4ad9-4a6a-a91a-bd3a8865d1af-kube-api-access-fkzml\") pod \"controller-6968d8fdc4-4lv4g\" (UID: \"a494a726-4ad9-4a6a-a91a-bd3a8865d1af\") " pod="metallb-system/controller-6968d8fdc4-4lv4g" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.904676 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/49d5f9df-c528-4fa0-bc0c-fba73c19add9-memberlist\") pod \"speaker-kbqgv\" (UID: \"49d5f9df-c528-4fa0-bc0c-fba73c19add9\") " pod="metallb-system/speaker-kbqgv" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.904734 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49d5f9df-c528-4fa0-bc0c-fba73c19add9-metrics-certs\") pod \"speaker-kbqgv\" (UID: \"49d5f9df-c528-4fa0-bc0c-fba73c19add9\") " pod="metallb-system/speaker-kbqgv" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.904751 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z6pr\" (UniqueName: \"kubernetes.io/projected/49d5f9df-c528-4fa0-bc0c-fba73c19add9-kube-api-access-8z6pr\") pod \"speaker-kbqgv\" (UID: \"49d5f9df-c528-4fa0-bc0c-fba73c19add9\") " pod="metallb-system/speaker-kbqgv" Jan 27 10:06:26 crc kubenswrapper[4869]: E0127 10:06:26.904784 4869 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.904816 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/49d5f9df-c528-4fa0-bc0c-fba73c19add9-metallb-excludel2\") pod \"speaker-kbqgv\" (UID: \"49d5f9df-c528-4fa0-bc0c-fba73c19add9\") " pod="metallb-system/speaker-kbqgv" Jan 27 10:06:26 crc kubenswrapper[4869]: E0127 10:06:26.904862 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49d5f9df-c528-4fa0-bc0c-fba73c19add9-memberlist podName:49d5f9df-c528-4fa0-bc0c-fba73c19add9 nodeName:}" failed. No retries permitted until 2026-01-27 10:06:27.404844806 +0000 UTC m=+756.025268889 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/49d5f9df-c528-4fa0-bc0c-fba73c19add9-memberlist") pod "speaker-kbqgv" (UID: "49d5f9df-c528-4fa0-bc0c-fba73c19add9") : secret "metallb-memberlist" not found Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.905572 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/49d5f9df-c528-4fa0-bc0c-fba73c19add9-metallb-excludel2\") pod \"speaker-kbqgv\" (UID: \"49d5f9df-c528-4fa0-bc0c-fba73c19add9\") " pod="metallb-system/speaker-kbqgv" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.911192 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49d5f9df-c528-4fa0-bc0c-fba73c19add9-metrics-certs\") pod \"speaker-kbqgv\" (UID: \"49d5f9df-c528-4fa0-bc0c-fba73c19add9\") " pod="metallb-system/speaker-kbqgv" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.924460 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z6pr\" (UniqueName: \"kubernetes.io/projected/49d5f9df-c528-4fa0-bc0c-fba73c19add9-kube-api-access-8z6pr\") pod \"speaker-kbqgv\" (UID: \"49d5f9df-c528-4fa0-bc0c-fba73c19add9\") " pod="metallb-system/speaker-kbqgv" Jan 27 10:06:26 crc kubenswrapper[4869]: I0127 10:06:26.932655 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-4lv4g" Jan 27 10:06:27 crc kubenswrapper[4869]: I0127 10:06:27.099925 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-4lv4g"] Jan 27 10:06:27 crc kubenswrapper[4869]: I0127 10:06:27.207532 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96c9106e-9af3-468a-8a06-4fbc013ab6d1-metrics-certs\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:27 crc kubenswrapper[4869]: I0127 10:06:27.207611 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4558cbce-4dbb-4621-a880-674cc8ea8353-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-vtdm4\" (UID: \"4558cbce-4dbb-4621-a880-674cc8ea8353\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vtdm4" Jan 27 10:06:27 crc kubenswrapper[4869]: I0127 10:06:27.211759 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96c9106e-9af3-468a-8a06-4fbc013ab6d1-metrics-certs\") pod \"frr-k8s-b4fnc\" (UID: \"96c9106e-9af3-468a-8a06-4fbc013ab6d1\") " pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:27 crc kubenswrapper[4869]: I0127 10:06:27.211972 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4558cbce-4dbb-4621-a880-674cc8ea8353-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-vtdm4\" (UID: \"4558cbce-4dbb-4621-a880-674cc8ea8353\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vtdm4" Jan 27 10:06:27 crc kubenswrapper[4869]: I0127 10:06:27.344205 4869 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 10:06:27 crc kubenswrapper[4869]: I0127 10:06:27.409969 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/49d5f9df-c528-4fa0-bc0c-fba73c19add9-memberlist\") pod \"speaker-kbqgv\" (UID: \"49d5f9df-c528-4fa0-bc0c-fba73c19add9\") " pod="metallb-system/speaker-kbqgv" Jan 27 10:06:27 crc kubenswrapper[4869]: I0127 10:06:27.416311 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/49d5f9df-c528-4fa0-bc0c-fba73c19add9-memberlist\") pod \"speaker-kbqgv\" (UID: \"49d5f9df-c528-4fa0-bc0c-fba73c19add9\") " pod="metallb-system/speaker-kbqgv" Jan 27 10:06:27 crc kubenswrapper[4869]: I0127 10:06:27.434113 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:27 crc kubenswrapper[4869]: I0127 10:06:27.452952 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vtdm4" Jan 27 10:06:27 crc kubenswrapper[4869]: I0127 10:06:27.524880 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-kbqgv" Jan 27 10:06:27 crc kubenswrapper[4869]: I0127 10:06:27.678665 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-vtdm4"] Jan 27 10:06:27 crc kubenswrapper[4869]: W0127 10:06:27.682302 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4558cbce_4dbb_4621_a880_674cc8ea8353.slice/crio-53079bc9171f31ef3371785ddf5f1290b1cc07d5ba2f6ac508d6e5e198d2350f WatchSource:0}: Error finding container 53079bc9171f31ef3371785ddf5f1290b1cc07d5ba2f6ac508d6e5e198d2350f: Status 404 returned error can't find the container with id 53079bc9171f31ef3371785ddf5f1290b1cc07d5ba2f6ac508d6e5e198d2350f Jan 27 10:06:27 crc kubenswrapper[4869]: I0127 10:06:27.974891 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-4lv4g" event={"ID":"a494a726-4ad9-4a6a-a91a-bd3a8865d1af","Type":"ContainerStarted","Data":"0034abc1203b06dff13ade68011c8b19dd3cf8fd2a57d8033779ef1931da5e8a"} Jan 27 10:06:27 crc kubenswrapper[4869]: I0127 10:06:27.975211 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-4lv4g" Jan 27 10:06:27 crc kubenswrapper[4869]: I0127 10:06:27.975225 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-4lv4g" event={"ID":"a494a726-4ad9-4a6a-a91a-bd3a8865d1af","Type":"ContainerStarted","Data":"e0dab9aa6e7c8c6690b068185b723f1f71e22469fbb4b43969de1fb18d35e1be"} Jan 27 10:06:27 crc kubenswrapper[4869]: I0127 10:06:27.975237 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-4lv4g" event={"ID":"a494a726-4ad9-4a6a-a91a-bd3a8865d1af","Type":"ContainerStarted","Data":"b41cfc92ae1e230210810506ffc2644e8c7c081cc6de4fb7e370414a182115d8"} Jan 27 10:06:27 crc kubenswrapper[4869]: I0127 10:06:27.982651 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kbqgv" event={"ID":"49d5f9df-c528-4fa0-bc0c-fba73c19add9","Type":"ContainerStarted","Data":"85569e53b029ebcea7edfa17d9b82d1d7ce8aa5bbada01e36d78a90cf8581153"} Jan 27 10:06:27 crc kubenswrapper[4869]: I0127 10:06:27.982704 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kbqgv" event={"ID":"49d5f9df-c528-4fa0-bc0c-fba73c19add9","Type":"ContainerStarted","Data":"1be96d2c26a21ac52b4948a3996dd2678f99dd524a6dfdf43a59e4b45f6ca5c8"} Jan 27 
10:06:27 crc kubenswrapper[4869]: I0127 10:06:27.984341 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vtdm4" event={"ID":"4558cbce-4dbb-4621-a880-674cc8ea8353","Type":"ContainerStarted","Data":"53079bc9171f31ef3371785ddf5f1290b1cc07d5ba2f6ac508d6e5e198d2350f"} Jan 27 10:06:27 crc kubenswrapper[4869]: I0127 10:06:27.985599 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b4fnc" event={"ID":"96c9106e-9af3-468a-8a06-4fbc013ab6d1","Type":"ContainerStarted","Data":"a5982cf023cb4c106075080d80a4b0029274f03d57f66bd742dbc041191ed4f4"} Jan 27 10:06:27 crc kubenswrapper[4869]: I0127 10:06:27.993524 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-4lv4g" podStartSLOduration=1.993508523 podStartE2EDuration="1.993508523s" podCreationTimestamp="2026-01-27 10:06:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 10:06:27.990727536 +0000 UTC m=+756.611151619" watchObservedRunningTime="2026-01-27 10:06:27.993508523 +0000 UTC m=+756.613932606" Jan 27 10:06:29 crc kubenswrapper[4869]: I0127 10:06:29.016582 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kbqgv" event={"ID":"49d5f9df-c528-4fa0-bc0c-fba73c19add9","Type":"ContainerStarted","Data":"be2fb2b73aa7fa31c7204599d13ece747dd24b80d867c4c77af153970ca1974c"} Jan 27 10:06:29 crc kubenswrapper[4869]: I0127 10:06:29.016647 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-kbqgv" Jan 27 10:06:32 crc kubenswrapper[4869]: I0127 10:06:32.052199 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-kbqgv" podStartSLOduration=6.052176041 podStartE2EDuration="6.052176041s" podCreationTimestamp="2026-01-27 10:06:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 10:06:29.044198105 +0000 UTC m=+757.664622188" watchObservedRunningTime="2026-01-27 10:06:32.052176041 +0000 UTC m=+760.672600124" Jan 27 10:06:35 crc kubenswrapper[4869]: I0127 10:06:35.075272 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vtdm4" event={"ID":"4558cbce-4dbb-4621-a880-674cc8ea8353","Type":"ContainerStarted","Data":"8f783abbe89b06867dacf1df8baa33102fa5777d414108014b236c8eaacb80ea"} Jan 27 10:06:35 crc kubenswrapper[4869]: I0127 10:06:35.075399 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vtdm4" Jan 27 10:06:35 crc kubenswrapper[4869]: I0127 10:06:35.077452 4869 generic.go:334] "Generic (PLEG): container finished" podID="96c9106e-9af3-468a-8a06-4fbc013ab6d1" containerID="f05341ac64d11d6a5ff47a0886e8d2f6ddea4e63036769a3a327d86dc2189dab" exitCode=0 Jan 27 10:06:35 crc kubenswrapper[4869]: I0127 10:06:35.077548 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b4fnc" event={"ID":"96c9106e-9af3-468a-8a06-4fbc013ab6d1","Type":"ContainerDied","Data":"f05341ac64d11d6a5ff47a0886e8d2f6ddea4e63036769a3a327d86dc2189dab"} Jan 27 10:06:35 crc kubenswrapper[4869]: I0127 10:06:35.090600 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vtdm4" podStartSLOduration=2.373845568 
podStartE2EDuration="9.090584117s" podCreationTimestamp="2026-01-27 10:06:26 +0000 UTC" firstStartedPulling="2026-01-27 10:06:27.685047237 +0000 UTC m=+756.305471320" lastFinishedPulling="2026-01-27 10:06:34.401785796 +0000 UTC m=+763.022209869" observedRunningTime="2026-01-27 10:06:35.08967938 +0000 UTC m=+763.710103503" watchObservedRunningTime="2026-01-27 10:06:35.090584117 +0000 UTC m=+763.711008200" Jan 27 10:06:36 crc kubenswrapper[4869]: I0127 10:06:36.086105 4869 generic.go:334] "Generic (PLEG): container finished" podID="96c9106e-9af3-468a-8a06-4fbc013ab6d1" containerID="01623414df7c8cc969f8505020b37390646753ac0da6578e14ab61b1f884849f" exitCode=0 Jan 27 10:06:36 crc kubenswrapper[4869]: I0127 10:06:36.086162 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b4fnc" event={"ID":"96c9106e-9af3-468a-8a06-4fbc013ab6d1","Type":"ContainerDied","Data":"01623414df7c8cc969f8505020b37390646753ac0da6578e14ab61b1f884849f"} Jan 27 10:06:37 crc kubenswrapper[4869]: I0127 10:06:37.094670 4869 generic.go:334] "Generic (PLEG): container finished" podID="96c9106e-9af3-468a-8a06-4fbc013ab6d1" containerID="cbaaea6dc26f57247bfa98d12477d80989e6c3c75d30cfa6ddb780b11e6d5eb7" exitCode=0 Jan 27 10:06:37 crc kubenswrapper[4869]: I0127 10:06:37.094711 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b4fnc" event={"ID":"96c9106e-9af3-468a-8a06-4fbc013ab6d1","Type":"ContainerDied","Data":"cbaaea6dc26f57247bfa98d12477d80989e6c3c75d30cfa6ddb780b11e6d5eb7"} Jan 27 10:06:37 crc kubenswrapper[4869]: I0127 10:06:37.529160 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-kbqgv" Jan 27 10:06:38 crc kubenswrapper[4869]: I0127 10:06:38.105616 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b4fnc" event={"ID":"96c9106e-9af3-468a-8a06-4fbc013ab6d1","Type":"ContainerStarted","Data":"319e00f1d1ebcb55ff35578b2fdf96ea754224bbe1a4a8647c146e2b0d79d334"} Jan 27 10:06:38 crc kubenswrapper[4869]: I0127 10:06:38.105661 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b4fnc" event={"ID":"96c9106e-9af3-468a-8a06-4fbc013ab6d1","Type":"ContainerStarted","Data":"70b15a000addeb8e33893c6f5683bb60a631fa7b22847d259df7dbe3bbf43013"} Jan 27 10:06:38 crc kubenswrapper[4869]: I0127 10:06:38.105670 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b4fnc" event={"ID":"96c9106e-9af3-468a-8a06-4fbc013ab6d1","Type":"ContainerStarted","Data":"85d4114a828261a3aeeb33c5f07db412f4e369946fc55975df755b93e4fe86cc"} Jan 27 10:06:38 crc kubenswrapper[4869]: I0127 10:06:38.105707 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b4fnc" event={"ID":"96c9106e-9af3-468a-8a06-4fbc013ab6d1","Type":"ContainerStarted","Data":"b7405d49d37f3ad2a23135cbd1e7af077ec0b116c49a637d8912ef34050b1e9b"} Jan 27 10:06:38 crc kubenswrapper[4869]: I0127 10:06:38.105716 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b4fnc" event={"ID":"96c9106e-9af3-468a-8a06-4fbc013ab6d1","Type":"ContainerStarted","Data":"f5eec0d036706b85e8f906855d721e5bee4842febcc0fe83faf8d2c0ffc951ef"} Jan 27 10:06:39 crc kubenswrapper[4869]: I0127 10:06:39.117022 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b4fnc" event={"ID":"96c9106e-9af3-468a-8a06-4fbc013ab6d1","Type":"ContainerStarted","Data":"b689643c2b6b850b8e50bfae4c623600d0d18cec103ab1f4e6fd55aeab16eced"} Jan 27 10:06:39 crc 
kubenswrapper[4869]: I0127 10:06:39.117920 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:39 crc kubenswrapper[4869]: I0127 10:06:39.139926 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-b4fnc" podStartSLOduration=6.301332898 podStartE2EDuration="13.139907358s" podCreationTimestamp="2026-01-27 10:06:26 +0000 UTC" firstStartedPulling="2026-01-27 10:06:27.536297802 +0000 UTC m=+756.156721885" lastFinishedPulling="2026-01-27 10:06:34.374872262 +0000 UTC m=+762.995296345" observedRunningTime="2026-01-27 10:06:39.138857186 +0000 UTC m=+767.759281299" watchObservedRunningTime="2026-01-27 10:06:39.139907358 +0000 UTC m=+767.760331441" Jan 27 10:06:40 crc kubenswrapper[4869]: I0127 10:06:40.812769 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-9fwt9"] Jan 27 10:06:40 crc kubenswrapper[4869]: I0127 10:06:40.813510 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9fwt9" Jan 27 10:06:40 crc kubenswrapper[4869]: I0127 10:06:40.819148 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 10:06:40 crc kubenswrapper[4869]: I0127 10:06:40.819547 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-ckp8p" Jan 27 10:06:40 crc kubenswrapper[4869]: I0127 10:06:40.820104 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 10:06:40 crc kubenswrapper[4869]: I0127 10:06:40.829281 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9fwt9"] Jan 27 10:06:40 crc kubenswrapper[4869]: I0127 10:06:40.922118 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4ksj\" (UniqueName: \"kubernetes.io/projected/40f0d7ca-791c-4b75-94b7-eac11e29ca55-kube-api-access-q4ksj\") pod \"openstack-operator-index-9fwt9\" (UID: \"40f0d7ca-791c-4b75-94b7-eac11e29ca55\") " pod="openstack-operators/openstack-operator-index-9fwt9" Jan 27 10:06:41 crc kubenswrapper[4869]: I0127 10:06:41.023932 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4ksj\" (UniqueName: \"kubernetes.io/projected/40f0d7ca-791c-4b75-94b7-eac11e29ca55-kube-api-access-q4ksj\") pod \"openstack-operator-index-9fwt9\" (UID: \"40f0d7ca-791c-4b75-94b7-eac11e29ca55\") " pod="openstack-operators/openstack-operator-index-9fwt9" Jan 27 10:06:41 crc kubenswrapper[4869]: I0127 10:06:41.043815 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4ksj\" (UniqueName: \"kubernetes.io/projected/40f0d7ca-791c-4b75-94b7-eac11e29ca55-kube-api-access-q4ksj\") pod \"openstack-operator-index-9fwt9\" (UID: \"40f0d7ca-791c-4b75-94b7-eac11e29ca55\") " pod="openstack-operators/openstack-operator-index-9fwt9" Jan 27 10:06:41 crc kubenswrapper[4869]: I0127 10:06:41.134042 4869 util.go:30] "No sandbox for pod can be found. 
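
[annotation] The pod_startup_latency_tracker entries are internally consistent and show what the two durations mean: podStartE2EDuration is wall-clock time from pod creation to observed running, and podStartSLOduration appears to be the same interval minus the image-pull window (firstStartedPulling to lastFinishedPulling). A quick check against the frr-k8s-b4fnc numbers above, using the timestamps exactly as logged:

```go
package main

import (
	"fmt"
	"time"
)

// Layout matching the logged timestamps; time.Parse also accepts a
// fractional-seconds field in the input even when the layout omits it.
const layout = "2006-01-02 15:04:05 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the frr-k8s-b4fnc startup-latency entry above.
	created := mustParse("2026-01-27 10:06:26 +0000 UTC")
	watchObservedRunning := mustParse("2026-01-27 10:06:39.139907358 +0000 UTC")
	firstPull := mustParse("2026-01-27 10:06:27.536297802 +0000 UTC")
	lastPull := mustParse("2026-01-27 10:06:34.374872262 +0000 UTC")

	e2e := watchObservedRunning.Sub(created)
	slo := e2e - lastPull.Sub(firstPull) // E2E minus the image-pull window

	fmt.Println("podStartE2EDuration =", e2e) // 13.139907358s, as logged
	fmt.Println("podStartSLOduration =", slo) // 6.301332898s, as logged
}
```

The same arithmetic reproduces the webhook server's figures (9.090584117s end-to-end, 2.373845568s SLO), which is why a pod that pulls images slowly can still meet its startup SLO.
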
Need to start a new one" pod="openstack-operators/openstack-operator-index-9fwt9" Jan 27 10:06:41 crc kubenswrapper[4869]: I0127 10:06:41.343654 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9fwt9"] Jan 27 10:06:41 crc kubenswrapper[4869]: W0127 10:06:41.350187 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40f0d7ca_791c_4b75_94b7_eac11e29ca55.slice/crio-3a01b434e15b4ea1ff12c6cbd0529fe05e734b07958c981033fac735e7ac8dbc WatchSource:0}: Error finding container 3a01b434e15b4ea1ff12c6cbd0529fe05e734b07958c981033fac735e7ac8dbc: Status 404 returned error can't find the container with id 3a01b434e15b4ea1ff12c6cbd0529fe05e734b07958c981033fac735e7ac8dbc Jan 27 10:06:42 crc kubenswrapper[4869]: I0127 10:06:42.138417 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9fwt9" event={"ID":"40f0d7ca-791c-4b75-94b7-eac11e29ca55","Type":"ContainerStarted","Data":"3a01b434e15b4ea1ff12c6cbd0529fe05e734b07958c981033fac735e7ac8dbc"} Jan 27 10:06:42 crc kubenswrapper[4869]: I0127 10:06:42.435213 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:42 crc kubenswrapper[4869]: I0127 10:06:42.478401 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:43 crc kubenswrapper[4869]: I0127 10:06:43.990143 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9fwt9"] Jan 27 10:06:44 crc kubenswrapper[4869]: I0127 10:06:44.153691 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9fwt9" event={"ID":"40f0d7ca-791c-4b75-94b7-eac11e29ca55","Type":"ContainerStarted","Data":"830b758d798c25d9914eb0838d920d6ef0914d72799c4da3645c9b8f4710c2df"} Jan 27 10:06:44 crc kubenswrapper[4869]: I0127 10:06:44.183864 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-9fwt9" podStartSLOduration=2.459132063 podStartE2EDuration="4.183806614s" podCreationTimestamp="2026-01-27 10:06:40 +0000 UTC" firstStartedPulling="2026-01-27 10:06:41.353187469 +0000 UTC m=+769.973611552" lastFinishedPulling="2026-01-27 10:06:43.07786202 +0000 UTC m=+771.698286103" observedRunningTime="2026-01-27 10:06:44.177322666 +0000 UTC m=+772.797746799" watchObservedRunningTime="2026-01-27 10:06:44.183806614 +0000 UTC m=+772.804230737" Jan 27 10:06:44 crc kubenswrapper[4869]: I0127 10:06:44.596346 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-l45ns"] Jan 27 10:06:44 crc kubenswrapper[4869]: I0127 10:06:44.597313 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-l45ns" Jan 27 10:06:44 crc kubenswrapper[4869]: I0127 10:06:44.606504 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-l45ns"] Jan 27 10:06:44 crc kubenswrapper[4869]: I0127 10:06:44.689522 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtl8g\" (UniqueName: \"kubernetes.io/projected/88ac0b56-ccfe-4c0a-b0eb-b56d1c6ef0fa-kube-api-access-jtl8g\") pod \"openstack-operator-index-l45ns\" (UID: \"88ac0b56-ccfe-4c0a-b0eb-b56d1c6ef0fa\") " pod="openstack-operators/openstack-operator-index-l45ns" Jan 27 10:06:44 crc kubenswrapper[4869]: I0127 10:06:44.790819 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtl8g\" (UniqueName: \"kubernetes.io/projected/88ac0b56-ccfe-4c0a-b0eb-b56d1c6ef0fa-kube-api-access-jtl8g\") pod \"openstack-operator-index-l45ns\" (UID: \"88ac0b56-ccfe-4c0a-b0eb-b56d1c6ef0fa\") " pod="openstack-operators/openstack-operator-index-l45ns" Jan 27 10:06:44 crc kubenswrapper[4869]: I0127 10:06:44.823300 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtl8g\" (UniqueName: \"kubernetes.io/projected/88ac0b56-ccfe-4c0a-b0eb-b56d1c6ef0fa-kube-api-access-jtl8g\") pod \"openstack-operator-index-l45ns\" (UID: \"88ac0b56-ccfe-4c0a-b0eb-b56d1c6ef0fa\") " pod="openstack-operators/openstack-operator-index-l45ns" Jan 27 10:06:44 crc kubenswrapper[4869]: I0127 10:06:44.918686 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-l45ns" Jan 27 10:06:45 crc kubenswrapper[4869]: I0127 10:06:45.128882 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-l45ns"] Jan 27 10:06:45 crc kubenswrapper[4869]: W0127 10:06:45.132030 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88ac0b56_ccfe_4c0a_b0eb_b56d1c6ef0fa.slice/crio-eb32eb8f192427fd1bb277e7724b908c53663f6dd6215e9b894fef163e80880d WatchSource:0}: Error finding container eb32eb8f192427fd1bb277e7724b908c53663f6dd6215e9b894fef163e80880d: Status 404 returned error can't find the container with id eb32eb8f192427fd1bb277e7724b908c53663f6dd6215e9b894fef163e80880d Jan 27 10:06:45 crc kubenswrapper[4869]: I0127 10:06:45.160405 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-l45ns" event={"ID":"88ac0b56-ccfe-4c0a-b0eb-b56d1c6ef0fa","Type":"ContainerStarted","Data":"eb32eb8f192427fd1bb277e7724b908c53663f6dd6215e9b894fef163e80880d"} Jan 27 10:06:45 crc kubenswrapper[4869]: I0127 10:06:45.160520 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-9fwt9" podUID="40f0d7ca-791c-4b75-94b7-eac11e29ca55" containerName="registry-server" containerID="cri-o://830b758d798c25d9914eb0838d920d6ef0914d72799c4da3645c9b8f4710c2df" gracePeriod=2 Jan 27 10:06:45 crc kubenswrapper[4869]: I0127 10:06:45.449856 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9fwt9" Jan 27 10:06:45 crc kubenswrapper[4869]: I0127 10:06:45.602079 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4ksj\" (UniqueName: \"kubernetes.io/projected/40f0d7ca-791c-4b75-94b7-eac11e29ca55-kube-api-access-q4ksj\") pod \"40f0d7ca-791c-4b75-94b7-eac11e29ca55\" (UID: \"40f0d7ca-791c-4b75-94b7-eac11e29ca55\") " Jan 27 10:06:45 crc kubenswrapper[4869]: I0127 10:06:45.609858 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40f0d7ca-791c-4b75-94b7-eac11e29ca55-kube-api-access-q4ksj" (OuterVolumeSpecName: "kube-api-access-q4ksj") pod "40f0d7ca-791c-4b75-94b7-eac11e29ca55" (UID: "40f0d7ca-791c-4b75-94b7-eac11e29ca55"). InnerVolumeSpecName "kube-api-access-q4ksj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:06:45 crc kubenswrapper[4869]: I0127 10:06:45.697870 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:06:45 crc kubenswrapper[4869]: I0127 10:06:45.697921 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:06:45 crc kubenswrapper[4869]: I0127 10:06:45.697958 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 10:06:45 crc kubenswrapper[4869]: I0127 10:06:45.698539 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4a99f8d4039d41e36670df28e70519808f43f55b1ba2158821f11696774fdec4"} pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 10:06:45 crc kubenswrapper[4869]: I0127 10:06:45.698587 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" containerID="cri-o://4a99f8d4039d41e36670df28e70519808f43f55b1ba2158821f11696774fdec4" gracePeriod=600 Jan 27 10:06:45 crc kubenswrapper[4869]: I0127 10:06:45.704171 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4ksj\" (UniqueName: \"kubernetes.io/projected/40f0d7ca-791c-4b75-94b7-eac11e29ca55-kube-api-access-q4ksj\") on node \"crc\" DevicePath \"\"" Jan 27 10:06:46 crc kubenswrapper[4869]: I0127 10:06:46.170380 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-l45ns" event={"ID":"88ac0b56-ccfe-4c0a-b0eb-b56d1c6ef0fa","Type":"ContainerStarted","Data":"a9a0e134da15e3cfcc4a9166830d502aa31992f54eed977cd4d59012855321e7"} Jan 27 10:06:46 crc kubenswrapper[4869]: I0127 10:06:46.174137 4869 generic.go:334] "Generic (PLEG): container finished" podID="40f0d7ca-791c-4b75-94b7-eac11e29ca55" containerID="830b758d798c25d9914eb0838d920d6ef0914d72799c4da3645c9b8f4710c2df" 
exitCode=0 Jan 27 10:06:46 crc kubenswrapper[4869]: I0127 10:06:46.174168 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9fwt9" event={"ID":"40f0d7ca-791c-4b75-94b7-eac11e29ca55","Type":"ContainerDied","Data":"830b758d798c25d9914eb0838d920d6ef0914d72799c4da3645c9b8f4710c2df"} Jan 27 10:06:46 crc kubenswrapper[4869]: I0127 10:06:46.174197 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9fwt9" event={"ID":"40f0d7ca-791c-4b75-94b7-eac11e29ca55","Type":"ContainerDied","Data":"3a01b434e15b4ea1ff12c6cbd0529fe05e734b07958c981033fac735e7ac8dbc"} Jan 27 10:06:46 crc kubenswrapper[4869]: I0127 10:06:46.174214 4869 scope.go:117] "RemoveContainer" containerID="830b758d798c25d9914eb0838d920d6ef0914d72799c4da3645c9b8f4710c2df" Jan 27 10:06:46 crc kubenswrapper[4869]: I0127 10:06:46.174261 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9fwt9" Jan 27 10:06:46 crc kubenswrapper[4869]: I0127 10:06:46.180718 4869 generic.go:334] "Generic (PLEG): container finished" podID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerID="4a99f8d4039d41e36670df28e70519808f43f55b1ba2158821f11696774fdec4" exitCode=0 Jan 27 10:06:46 crc kubenswrapper[4869]: I0127 10:06:46.180763 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerDied","Data":"4a99f8d4039d41e36670df28e70519808f43f55b1ba2158821f11696774fdec4"} Jan 27 10:06:46 crc kubenswrapper[4869]: I0127 10:06:46.180788 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerStarted","Data":"dc8a6d1fdbc6b3f8427a05417ce1783a27aac64b6b76b4051c7a781e964cbb0b"} Jan 27 10:06:46 crc kubenswrapper[4869]: I0127 10:06:46.197681 4869 scope.go:117] "RemoveContainer" containerID="830b758d798c25d9914eb0838d920d6ef0914d72799c4da3645c9b8f4710c2df" Jan 27 10:06:46 crc kubenswrapper[4869]: E0127 10:06:46.198336 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"830b758d798c25d9914eb0838d920d6ef0914d72799c4da3645c9b8f4710c2df\": container with ID starting with 830b758d798c25d9914eb0838d920d6ef0914d72799c4da3645c9b8f4710c2df not found: ID does not exist" containerID="830b758d798c25d9914eb0838d920d6ef0914d72799c4da3645c9b8f4710c2df" Jan 27 10:06:46 crc kubenswrapper[4869]: I0127 10:06:46.198387 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"830b758d798c25d9914eb0838d920d6ef0914d72799c4da3645c9b8f4710c2df"} err="failed to get container status \"830b758d798c25d9914eb0838d920d6ef0914d72799c4da3645c9b8f4710c2df\": rpc error: code = NotFound desc = could not find container \"830b758d798c25d9914eb0838d920d6ef0914d72799c4da3645c9b8f4710c2df\": container with ID starting with 830b758d798c25d9914eb0838d920d6ef0914d72799c4da3645c9b8f4710c2df not found: ID does not exist" Jan 27 10:06:46 crc kubenswrapper[4869]: I0127 10:06:46.198419 4869 scope.go:117] "RemoveContainer" containerID="e4e1f681e75097eec891089999a26357ca0f39f6b81a768157a4ab35694ce21e" Jan 27 10:06:46 crc kubenswrapper[4869]: I0127 10:06:46.213652 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
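
[annotation] The RemoveContainer/ContainerStatus exchange above looks like an error but is harmless: after deleting container 830b758d..., a follow-up status query returns NotFound, and kubelet logs "DeleteContainer returned error" yet carries on. Treating "already gone" as success makes cleanup idempotent, so retries and races with the runtime's own garbage collection cannot become hard failures. A sketch of that pattern:

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the CRI "NotFound" status code seen above.
var errNotFound = errors.New("container not found: ID does not exist")

// removeContainer treats "already gone" as success so cleanup is idempotent.
func removeContainer(id string, remove func(string) error) error {
	if err := remove(id); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("container %s already removed; nothing to do\n", id[:12])
			return nil
		}
		return err
	}
	return nil
}

func main() {
	// First call deletes; the second hits NotFound and still succeeds.
	gone := false
	remove := func(id string) error {
		if gone {
			return errNotFound
		}
		gone = true
		return nil
	}
	id := "830b758d798c25d9914eb0838d920d6ef0914d72799c4da3645c9b8f4710c2df"
	fmt.Println(removeContainer(id, remove)) // <nil>
	fmt.Println(removeContainer(id, remove)) // <nil>, after "already removed"
}
```
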
pod="openstack-operators/openstack-operator-index-l45ns" podStartSLOduration=2.157669955 podStartE2EDuration="2.213628658s" podCreationTimestamp="2026-01-27 10:06:44 +0000 UTC" firstStartedPulling="2026-01-27 10:06:45.137165407 +0000 UTC m=+773.757589490" lastFinishedPulling="2026-01-27 10:06:45.19312411 +0000 UTC m=+773.813548193" observedRunningTime="2026-01-27 10:06:46.190088708 +0000 UTC m=+774.810512791" watchObservedRunningTime="2026-01-27 10:06:46.213628658 +0000 UTC m=+774.834052731" Jan 27 10:06:46 crc kubenswrapper[4869]: I0127 10:06:46.222092 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9fwt9"] Jan 27 10:06:46 crc kubenswrapper[4869]: I0127 10:06:46.227501 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-9fwt9"] Jan 27 10:06:46 crc kubenswrapper[4869]: I0127 10:06:46.937771 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-4lv4g" Jan 27 10:06:47 crc kubenswrapper[4869]: I0127 10:06:47.435845 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-b4fnc" Jan 27 10:06:47 crc kubenswrapper[4869]: I0127 10:06:47.462599 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vtdm4" Jan 27 10:06:48 crc kubenswrapper[4869]: I0127 10:06:48.044684 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40f0d7ca-791c-4b75-94b7-eac11e29ca55" path="/var/lib/kubelet/pods/40f0d7ca-791c-4b75-94b7-eac11e29ca55/volumes" Jan 27 10:06:54 crc kubenswrapper[4869]: I0127 10:06:54.919908 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-l45ns" Jan 27 10:06:54 crc kubenswrapper[4869]: I0127 10:06:54.920656 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-l45ns" Jan 27 10:06:54 crc kubenswrapper[4869]: I0127 10:06:54.948590 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-l45ns" Jan 27 10:06:55 crc kubenswrapper[4869]: I0127 10:06:55.283068 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-l45ns" Jan 27 10:07:00 crc kubenswrapper[4869]: I0127 10:07:00.646745 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4"] Jan 27 10:07:00 crc kubenswrapper[4869]: E0127 10:07:00.647416 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f0d7ca-791c-4b75-94b7-eac11e29ca55" containerName="registry-server" Jan 27 10:07:00 crc kubenswrapper[4869]: I0127 10:07:00.647437 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f0d7ca-791c-4b75-94b7-eac11e29ca55" containerName="registry-server" Jan 27 10:07:00 crc kubenswrapper[4869]: I0127 10:07:00.647635 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="40f0d7ca-791c-4b75-94b7-eac11e29ca55" containerName="registry-server" Jan 27 10:07:00 crc kubenswrapper[4869]: I0127 10:07:00.649173 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4" Jan 27 10:07:00 crc kubenswrapper[4869]: I0127 10:07:00.653005 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xcglw" Jan 27 10:07:00 crc kubenswrapper[4869]: I0127 10:07:00.657460 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4"] Jan 27 10:07:00 crc kubenswrapper[4869]: I0127 10:07:00.813126 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3dd40da-058c-45a7-89be-624d27129825-util\") pod \"44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4\" (UID: \"c3dd40da-058c-45a7-89be-624d27129825\") " pod="openstack-operators/44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4" Jan 27 10:07:00 crc kubenswrapper[4869]: I0127 10:07:00.813197 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdnpq\" (UniqueName: \"kubernetes.io/projected/c3dd40da-058c-45a7-89be-624d27129825-kube-api-access-cdnpq\") pod \"44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4\" (UID: \"c3dd40da-058c-45a7-89be-624d27129825\") " pod="openstack-operators/44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4" Jan 27 10:07:00 crc kubenswrapper[4869]: I0127 10:07:00.813236 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3dd40da-058c-45a7-89be-624d27129825-bundle\") pod \"44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4\" (UID: \"c3dd40da-058c-45a7-89be-624d27129825\") " pod="openstack-operators/44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4" Jan 27 10:07:00 crc kubenswrapper[4869]: I0127 10:07:00.914798 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3dd40da-058c-45a7-89be-624d27129825-util\") pod \"44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4\" (UID: \"c3dd40da-058c-45a7-89be-624d27129825\") " pod="openstack-operators/44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4" Jan 27 10:07:00 crc kubenswrapper[4869]: I0127 10:07:00.914958 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdnpq\" (UniqueName: \"kubernetes.io/projected/c3dd40da-058c-45a7-89be-624d27129825-kube-api-access-cdnpq\") pod \"44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4\" (UID: \"c3dd40da-058c-45a7-89be-624d27129825\") " pod="openstack-operators/44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4" Jan 27 10:07:00 crc kubenswrapper[4869]: I0127 10:07:00.915019 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3dd40da-058c-45a7-89be-624d27129825-bundle\") pod \"44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4\" (UID: \"c3dd40da-058c-45a7-89be-624d27129825\") " pod="openstack-operators/44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4" Jan 27 10:07:00 crc kubenswrapper[4869]: I0127 10:07:00.915299 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c3dd40da-058c-45a7-89be-624d27129825-util\") pod \"44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4\" (UID: \"c3dd40da-058c-45a7-89be-624d27129825\") " pod="openstack-operators/44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4" Jan 27 10:07:00 crc kubenswrapper[4869]: I0127 10:07:00.915713 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3dd40da-058c-45a7-89be-624d27129825-bundle\") pod \"44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4\" (UID: \"c3dd40da-058c-45a7-89be-624d27129825\") " pod="openstack-operators/44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4" Jan 27 10:07:00 crc kubenswrapper[4869]: I0127 10:07:00.951376 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdnpq\" (UniqueName: \"kubernetes.io/projected/c3dd40da-058c-45a7-89be-624d27129825-kube-api-access-cdnpq\") pod \"44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4\" (UID: \"c3dd40da-058c-45a7-89be-624d27129825\") " pod="openstack-operators/44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4" Jan 27 10:07:00 crc kubenswrapper[4869]: I0127 10:07:00.995524 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4" Jan 27 10:07:01 crc kubenswrapper[4869]: I0127 10:07:01.243276 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4"] Jan 27 10:07:01 crc kubenswrapper[4869]: I0127 10:07:01.293013 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4" event={"ID":"c3dd40da-058c-45a7-89be-624d27129825","Type":"ContainerStarted","Data":"3fa41a1c0d86617a8686557cf8bb4d476074e86dec4247b731fde6b4f7ddc724"} Jan 27 10:07:02 crc kubenswrapper[4869]: I0127 10:07:02.301157 4869 generic.go:334] "Generic (PLEG): container finished" podID="c3dd40da-058c-45a7-89be-624d27129825" containerID="c602b69a0678bf0da3143973750b479f581e49da553a89c238b630e46bf89df9" exitCode=0 Jan 27 10:07:02 crc kubenswrapper[4869]: I0127 10:07:02.301260 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4" event={"ID":"c3dd40da-058c-45a7-89be-624d27129825","Type":"ContainerDied","Data":"c602b69a0678bf0da3143973750b479f581e49da553a89c238b630e46bf89df9"} Jan 27 10:07:03 crc kubenswrapper[4869]: I0127 10:07:03.312426 4869 generic.go:334] "Generic (PLEG): container finished" podID="c3dd40da-058c-45a7-89be-624d27129825" containerID="8ef0dab3542fbc8eae1fba3739e9a985fe132eab6a73f07402cef73510e8f261" exitCode=0 Jan 27 10:07:03 crc kubenswrapper[4869]: I0127 10:07:03.312519 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4" event={"ID":"c3dd40da-058c-45a7-89be-624d27129825","Type":"ContainerDied","Data":"8ef0dab3542fbc8eae1fba3739e9a985fe132eab6a73f07402cef73510e8f261"} Jan 27 10:07:04 crc kubenswrapper[4869]: I0127 10:07:04.324053 4869 generic.go:334] "Generic (PLEG): container finished" podID="c3dd40da-058c-45a7-89be-624d27129825" containerID="39e04a20a5218ea170e85f825b559e892e087a6d3af507b86faf3878f70e9933" exitCode=0 Jan 27 10:07:04 crc kubenswrapper[4869]: I0127 10:07:04.324475 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4" event={"ID":"c3dd40da-058c-45a7-89be-624d27129825","Type":"ContainerDied","Data":"39e04a20a5218ea170e85f825b559e892e087a6d3af507b86faf3878f70e9933"} Jan 27 10:07:05 crc kubenswrapper[4869]: I0127 10:07:05.645578 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4" Jan 27 10:07:05 crc kubenswrapper[4869]: I0127 10:07:05.791603 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3dd40da-058c-45a7-89be-624d27129825-util\") pod \"c3dd40da-058c-45a7-89be-624d27129825\" (UID: \"c3dd40da-058c-45a7-89be-624d27129825\") " Jan 27 10:07:05 crc kubenswrapper[4869]: I0127 10:07:05.791652 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3dd40da-058c-45a7-89be-624d27129825-bundle\") pod \"c3dd40da-058c-45a7-89be-624d27129825\" (UID: \"c3dd40da-058c-45a7-89be-624d27129825\") " Jan 27 10:07:05 crc kubenswrapper[4869]: I0127 10:07:05.791803 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdnpq\" (UniqueName: \"kubernetes.io/projected/c3dd40da-058c-45a7-89be-624d27129825-kube-api-access-cdnpq\") pod \"c3dd40da-058c-45a7-89be-624d27129825\" (UID: \"c3dd40da-058c-45a7-89be-624d27129825\") " Jan 27 10:07:05 crc kubenswrapper[4869]: I0127 10:07:05.792705 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3dd40da-058c-45a7-89be-624d27129825-bundle" (OuterVolumeSpecName: "bundle") pod "c3dd40da-058c-45a7-89be-624d27129825" (UID: "c3dd40da-058c-45a7-89be-624d27129825"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:07:05 crc kubenswrapper[4869]: I0127 10:07:05.800050 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3dd40da-058c-45a7-89be-624d27129825-kube-api-access-cdnpq" (OuterVolumeSpecName: "kube-api-access-cdnpq") pod "c3dd40da-058c-45a7-89be-624d27129825" (UID: "c3dd40da-058c-45a7-89be-624d27129825"). InnerVolumeSpecName "kube-api-access-cdnpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:07:05 crc kubenswrapper[4869]: I0127 10:07:05.806511 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3dd40da-058c-45a7-89be-624d27129825-util" (OuterVolumeSpecName: "util") pod "c3dd40da-058c-45a7-89be-624d27129825" (UID: "c3dd40da-058c-45a7-89be-624d27129825"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:07:05 crc kubenswrapper[4869]: I0127 10:07:05.893608 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3dd40da-058c-45a7-89be-624d27129825-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 10:07:05 crc kubenswrapper[4869]: I0127 10:07:05.893662 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdnpq\" (UniqueName: \"kubernetes.io/projected/c3dd40da-058c-45a7-89be-624d27129825-kube-api-access-cdnpq\") on node \"crc\" DevicePath \"\"" Jan 27 10:07:05 crc kubenswrapper[4869]: I0127 10:07:05.893673 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3dd40da-058c-45a7-89be-624d27129825-util\") on node \"crc\" DevicePath \"\"" Jan 27 10:07:06 crc kubenswrapper[4869]: I0127 10:07:06.354376 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4" Jan 27 10:07:06 crc kubenswrapper[4869]: I0127 10:07:06.354331 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4" event={"ID":"c3dd40da-058c-45a7-89be-624d27129825","Type":"ContainerDied","Data":"3fa41a1c0d86617a8686557cf8bb4d476074e86dec4247b731fde6b4f7ddc724"} Jan 27 10:07:06 crc kubenswrapper[4869]: I0127 10:07:06.355451 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fa41a1c0d86617a8686557cf8bb4d476074e86dec4247b731fde6b4f7ddc724" Jan 27 10:07:07 crc kubenswrapper[4869]: I0127 10:07:07.969519 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-866665f5dd-q6mmh"] Jan 27 10:07:07 crc kubenswrapper[4869]: E0127 10:07:07.969771 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3dd40da-058c-45a7-89be-624d27129825" containerName="util" Jan 27 10:07:07 crc kubenswrapper[4869]: I0127 10:07:07.969784 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3dd40da-058c-45a7-89be-624d27129825" containerName="util" Jan 27 10:07:07 crc kubenswrapper[4869]: E0127 10:07:07.969791 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3dd40da-058c-45a7-89be-624d27129825" containerName="extract" Jan 27 10:07:07 crc kubenswrapper[4869]: I0127 10:07:07.969797 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3dd40da-058c-45a7-89be-624d27129825" containerName="extract" Jan 27 10:07:07 crc kubenswrapper[4869]: E0127 10:07:07.969812 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3dd40da-058c-45a7-89be-624d27129825" containerName="pull" Jan 27 10:07:07 crc kubenswrapper[4869]: I0127 10:07:07.969818 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3dd40da-058c-45a7-89be-624d27129825" containerName="pull" Jan 27 10:07:07 crc kubenswrapper[4869]: I0127 10:07:07.969947 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3dd40da-058c-45a7-89be-624d27129825" containerName="extract" Jan 27 10:07:07 crc kubenswrapper[4869]: I0127 10:07:07.970330 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-866665f5dd-q6mmh" Jan 27 10:07:07 crc kubenswrapper[4869]: I0127 10:07:07.972599 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-gvz25" Jan 27 10:07:07 crc kubenswrapper[4869]: I0127 10:07:07.998511 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-866665f5dd-q6mmh"] Jan 27 10:07:08 crc kubenswrapper[4869]: I0127 10:07:08.020991 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r62pm\" (UniqueName: \"kubernetes.io/projected/bcf2849f-1329-4523-83d0-4ad8ec004ce1-kube-api-access-r62pm\") pod \"openstack-operator-controller-init-866665f5dd-q6mmh\" (UID: \"bcf2849f-1329-4523-83d0-4ad8ec004ce1\") " pod="openstack-operators/openstack-operator-controller-init-866665f5dd-q6mmh" Jan 27 10:07:08 crc kubenswrapper[4869]: I0127 10:07:08.121809 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r62pm\" (UniqueName: \"kubernetes.io/projected/bcf2849f-1329-4523-83d0-4ad8ec004ce1-kube-api-access-r62pm\") pod \"openstack-operator-controller-init-866665f5dd-q6mmh\" (UID: \"bcf2849f-1329-4523-83d0-4ad8ec004ce1\") " pod="openstack-operators/openstack-operator-controller-init-866665f5dd-q6mmh" Jan 27 10:07:08 crc kubenswrapper[4869]: I0127 10:07:08.143662 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r62pm\" (UniqueName: \"kubernetes.io/projected/bcf2849f-1329-4523-83d0-4ad8ec004ce1-kube-api-access-r62pm\") pod \"openstack-operator-controller-init-866665f5dd-q6mmh\" (UID: \"bcf2849f-1329-4523-83d0-4ad8ec004ce1\") " pod="openstack-operators/openstack-operator-controller-init-866665f5dd-q6mmh" Jan 27 10:07:08 crc kubenswrapper[4869]: I0127 10:07:08.286325 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-866665f5dd-q6mmh" Jan 27 10:07:08 crc kubenswrapper[4869]: I0127 10:07:08.530183 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-866665f5dd-q6mmh"] Jan 27 10:07:08 crc kubenswrapper[4869]: W0127 10:07:08.549940 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcf2849f_1329_4523_83d0_4ad8ec004ce1.slice/crio-618bcb68a55f3e34f4aadeb4ad994c6695c33f96a47f75a28e68aecbd1eb143b WatchSource:0}: Error finding container 618bcb68a55f3e34f4aadeb4ad994c6695c33f96a47f75a28e68aecbd1eb143b: Status 404 returned error can't find the container with id 618bcb68a55f3e34f4aadeb4ad994c6695c33f96a47f75a28e68aecbd1eb143b Jan 27 10:07:09 crc kubenswrapper[4869]: I0127 10:07:09.374130 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-866665f5dd-q6mmh" event={"ID":"bcf2849f-1329-4523-83d0-4ad8ec004ce1","Type":"ContainerStarted","Data":"618bcb68a55f3e34f4aadeb4ad994c6695c33f96a47f75a28e68aecbd1eb143b"} Jan 27 10:07:12 crc kubenswrapper[4869]: I0127 10:07:12.393539 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-866665f5dd-q6mmh" event={"ID":"bcf2849f-1329-4523-83d0-4ad8ec004ce1","Type":"ContainerStarted","Data":"f055f4637d5a505084da6342ae73893a54484aabb30d38094ac0f15217db6a18"} Jan 27 10:07:12 crc kubenswrapper[4869]: I0127 10:07:12.394267 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-866665f5dd-q6mmh" Jan 27 10:07:12 crc kubenswrapper[4869]: I0127 10:07:12.443724 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-866665f5dd-q6mmh" podStartSLOduration=2.019907437 podStartE2EDuration="5.44369719s" podCreationTimestamp="2026-01-27 10:07:07 +0000 UTC" firstStartedPulling="2026-01-27 10:07:08.559289084 +0000 UTC m=+797.179713167" lastFinishedPulling="2026-01-27 10:07:11.983078837 +0000 UTC m=+800.603502920" observedRunningTime="2026-01-27 10:07:12.434069943 +0000 UTC m=+801.054494036" watchObservedRunningTime="2026-01-27 10:07:12.44369719 +0000 UTC m=+801.064121513" Jan 27 10:07:18 crc kubenswrapper[4869]: I0127 10:07:18.300302 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-866665f5dd-q6mmh" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.084479 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-7p2lg"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.085895 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7p2lg" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.087233 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-sztj2" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.093723 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7tzv4"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.094673 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7tzv4" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.096463 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-jzfxm" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.104076 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-g5vdg"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.104817 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g5vdg" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.106549 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-s8n69" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.111765 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-7p2lg"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.134027 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7tzv4"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.143514 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnrgv\" (UniqueName: \"kubernetes.io/projected/b8e9717b-6786-4882-99ae-bbcaa887e310-kube-api-access-lnrgv\") pod \"barbican-operator-controller-manager-7f86f8796f-7p2lg\" (UID: \"b8e9717b-6786-4882-99ae-bbcaa887e310\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7p2lg" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.143563 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvcfr\" (UniqueName: \"kubernetes.io/projected/cfdd145e-d7b8-4078-aaa6-9b9827749b9a-kube-api-access-fvcfr\") pod \"cinder-operator-controller-manager-7478f7dbf9-7tzv4\" (UID: \"cfdd145e-d7b8-4078-aaa6-9b9827749b9a\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7tzv4" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.143590 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mzp7\" (UniqueName: \"kubernetes.io/projected/60bb147d-e703-4ac4-8068-aa416605b7b5-kube-api-access-4mzp7\") pod \"designate-operator-controller-manager-b45d7bf98-g5vdg\" (UID: \"60bb147d-e703-4ac4-8068-aa416605b7b5\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g5vdg" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.145113 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-g5vdg"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.166917 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-gbl72"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.168000 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gbl72" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.169709 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-vtqv8" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.179081 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-tjf5f"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.179870 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-tjf5f" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.184277 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-77dcn" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.187904 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-tjf5f"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.194862 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-gbl72"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.211642 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rhfb"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.212705 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rhfb" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.215813 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-cwvkf" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.218190 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7f6fb95f66-4xhrc"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.218889 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7f6fb95f66-4xhrc" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.233016 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.233600 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-q2ssx" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.244973 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rhfb"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.246249 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnrgv\" (UniqueName: \"kubernetes.io/projected/b8e9717b-6786-4882-99ae-bbcaa887e310-kube-api-access-lnrgv\") pod \"barbican-operator-controller-manager-7f86f8796f-7p2lg\" (UID: \"b8e9717b-6786-4882-99ae-bbcaa887e310\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7p2lg" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.246294 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvcfr\" (UniqueName: \"kubernetes.io/projected/cfdd145e-d7b8-4078-aaa6-9b9827749b9a-kube-api-access-fvcfr\") pod \"cinder-operator-controller-manager-7478f7dbf9-7tzv4\" (UID: \"cfdd145e-d7b8-4078-aaa6-9b9827749b9a\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7tzv4" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.246327 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mzp7\" (UniqueName: \"kubernetes.io/projected/60bb147d-e703-4ac4-8068-aa416605b7b5-kube-api-access-4mzp7\") pod \"designate-operator-controller-manager-b45d7bf98-g5vdg\" (UID: \"60bb147d-e703-4ac4-8068-aa416605b7b5\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g5vdg" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.246354 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcmpg\" (UniqueName: \"kubernetes.io/projected/dffa1b35-d981-4c5c-8df0-341e6a5941a6-kube-api-access-hcmpg\") pod \"infra-operator-controller-manager-7f6fb95f66-4xhrc\" (UID: \"dffa1b35-d981-4c5c-8df0-341e6a5941a6\") " pod="openstack-operators/infra-operator-controller-manager-7f6fb95f66-4xhrc" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.246389 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dffa1b35-d981-4c5c-8df0-341e6a5941a6-cert\") pod \"infra-operator-controller-manager-7f6fb95f66-4xhrc\" (UID: \"dffa1b35-d981-4c5c-8df0-341e6a5941a6\") " pod="openstack-operators/infra-operator-controller-manager-7f6fb95f66-4xhrc" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.246410 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79fcn\" (UniqueName: \"kubernetes.io/projected/96f59ef6-bb4a-453d-9de2-ba5e0933df0a-kube-api-access-79fcn\") pod \"heat-operator-controller-manager-594c8c9d5d-tjf5f\" (UID: \"96f59ef6-bb4a-453d-9de2-ba5e0933df0a\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-tjf5f" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.246447 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64rwv\" (UniqueName: \"kubernetes.io/projected/22116ec0-0e77-4752-b374-ad20f73dc3f4-kube-api-access-64rwv\") pod \"glance-operator-controller-manager-78fdd796fd-gbl72\" (UID: \"22116ec0-0e77-4752-b374-ad20f73dc3f4\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gbl72" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.246475 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf9tf\" (UniqueName: \"kubernetes.io/projected/1e946d3d-37fb-4bb6-8c8f-b7dcba782889-kube-api-access-tf9tf\") pod \"horizon-operator-controller-manager-77d5c5b54f-8rhfb\" (UID: \"1e946d3d-37fb-4bb6-8c8f-b7dcba782889\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rhfb" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.251679 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7f6fb95f66-4xhrc"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.271980 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-pvgqp"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.273228 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pvgqp" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.283894 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-nqshn" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.290287 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mzp7\" (UniqueName: \"kubernetes.io/projected/60bb147d-e703-4ac4-8068-aa416605b7b5-kube-api-access-4mzp7\") pod \"designate-operator-controller-manager-b45d7bf98-g5vdg\" (UID: \"60bb147d-e703-4ac4-8068-aa416605b7b5\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g5vdg" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.292652 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnrgv\" (UniqueName: \"kubernetes.io/projected/b8e9717b-6786-4882-99ae-bbcaa887e310-kube-api-access-lnrgv\") pod \"barbican-operator-controller-manager-7f86f8796f-7p2lg\" (UID: \"b8e9717b-6786-4882-99ae-bbcaa887e310\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7p2lg" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.294733 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-pvgqp"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.297193 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvcfr\" (UniqueName: \"kubernetes.io/projected/cfdd145e-d7b8-4078-aaa6-9b9827749b9a-kube-api-access-fvcfr\") pod \"cinder-operator-controller-manager-7478f7dbf9-7tzv4\" (UID: \"cfdd145e-d7b8-4078-aaa6-9b9827749b9a\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7tzv4" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.313313 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-kph4d"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.314092 4869 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kph4d" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.315943 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-2jx4c" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.333115 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-kph4d"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.336967 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-g799f"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.337813 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g799f" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.340536 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-zhb4z" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.348454 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcmpg\" (UniqueName: \"kubernetes.io/projected/dffa1b35-d981-4c5c-8df0-341e6a5941a6-kube-api-access-hcmpg\") pod \"infra-operator-controller-manager-7f6fb95f66-4xhrc\" (UID: \"dffa1b35-d981-4c5c-8df0-341e6a5941a6\") " pod="openstack-operators/infra-operator-controller-manager-7f6fb95f66-4xhrc" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.348503 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dffa1b35-d981-4c5c-8df0-341e6a5941a6-cert\") pod \"infra-operator-controller-manager-7f6fb95f66-4xhrc\" (UID: \"dffa1b35-d981-4c5c-8df0-341e6a5941a6\") " pod="openstack-operators/infra-operator-controller-manager-7f6fb95f66-4xhrc" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.348525 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79fcn\" (UniqueName: \"kubernetes.io/projected/96f59ef6-bb4a-453d-9de2-ba5e0933df0a-kube-api-access-79fcn\") pod \"heat-operator-controller-manager-594c8c9d5d-tjf5f\" (UID: \"96f59ef6-bb4a-453d-9de2-ba5e0933df0a\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-tjf5f" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.348564 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64rwv\" (UniqueName: \"kubernetes.io/projected/22116ec0-0e77-4752-b374-ad20f73dc3f4-kube-api-access-64rwv\") pod \"glance-operator-controller-manager-78fdd796fd-gbl72\" (UID: \"22116ec0-0e77-4752-b374-ad20f73dc3f4\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gbl72" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.348587 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf9tf\" (UniqueName: \"kubernetes.io/projected/1e946d3d-37fb-4bb6-8c8f-b7dcba782889-kube-api-access-tf9tf\") pod \"horizon-operator-controller-manager-77d5c5b54f-8rhfb\" (UID: \"1e946d3d-37fb-4bb6-8c8f-b7dcba782889\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rhfb" Jan 27 10:07:37 crc kubenswrapper[4869]: E0127 10:07:37.349102 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: 
secret "infra-operator-webhook-server-cert" not found Jan 27 10:07:37 crc kubenswrapper[4869]: E0127 10:07:37.349149 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dffa1b35-d981-4c5c-8df0-341e6a5941a6-cert podName:dffa1b35-d981-4c5c-8df0-341e6a5941a6 nodeName:}" failed. No retries permitted until 2026-01-27 10:07:37.849132799 +0000 UTC m=+826.469556882 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dffa1b35-d981-4c5c-8df0-341e6a5941a6-cert") pod "infra-operator-controller-manager-7f6fb95f66-4xhrc" (UID: "dffa1b35-d981-4c5c-8df0-341e6a5941a6") : secret "infra-operator-webhook-server-cert" not found Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.355301 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8bvfh"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.356070 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8bvfh" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.361421 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-g799f"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.362565 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-7j5t6" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.368036 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-dj764"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.368855 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dj764" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.374990 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8bvfh"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.377413 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-nggmn" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.378706 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcmpg\" (UniqueName: \"kubernetes.io/projected/dffa1b35-d981-4c5c-8df0-341e6a5941a6-kube-api-access-hcmpg\") pod \"infra-operator-controller-manager-7f6fb95f66-4xhrc\" (UID: \"dffa1b35-d981-4c5c-8df0-341e6a5941a6\") " pod="openstack-operators/infra-operator-controller-manager-7f6fb95f66-4xhrc" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.388512 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64rwv\" (UniqueName: \"kubernetes.io/projected/22116ec0-0e77-4752-b374-ad20f73dc3f4-kube-api-access-64rwv\") pod \"glance-operator-controller-manager-78fdd796fd-gbl72\" (UID: \"22116ec0-0e77-4752-b374-ad20f73dc3f4\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gbl72" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.392413 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf9tf\" (UniqueName: \"kubernetes.io/projected/1e946d3d-37fb-4bb6-8c8f-b7dcba782889-kube-api-access-tf9tf\") pod \"horizon-operator-controller-manager-77d5c5b54f-8rhfb\" (UID: \"1e946d3d-37fb-4bb6-8c8f-b7dcba782889\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rhfb" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.392414 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79fcn\" (UniqueName: \"kubernetes.io/projected/96f59ef6-bb4a-453d-9de2-ba5e0933df0a-kube-api-access-79fcn\") pod \"heat-operator-controller-manager-594c8c9d5d-tjf5f\" (UID: \"96f59ef6-bb4a-453d-9de2-ba5e0933df0a\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-tjf5f" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.397215 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-xnt8t"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.398015 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xnt8t" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.401124 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-dj764"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.403865 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7p2lg" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.414121 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-ddrc6" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.420300 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-xnt8t"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.426264 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7tzv4" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.438805 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-h6pnx"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.439514 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-h6pnx" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.445734 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g5vdg" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.446697 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-whnxw" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.449562 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2m77\" (UniqueName: \"kubernetes.io/projected/fa54c4d9-8d7b-4284-bb64-d21d21e9a83e-kube-api-access-q2m77\") pod \"manila-operator-controller-manager-78c6999f6f-g799f\" (UID: \"fa54c4d9-8d7b-4284-bb64-d21d21e9a83e\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g799f" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.449668 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqrmc\" (UniqueName: \"kubernetes.io/projected/5a1cd9b4-00f9-430f-8857-718672e03003-kube-api-access-lqrmc\") pod \"ironic-operator-controller-manager-598f7747c9-pvgqp\" (UID: \"5a1cd9b4-00f9-430f-8857-718672e03003\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pvgqp" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.449693 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bzd9\" (UniqueName: \"kubernetes.io/projected/95e36175-15e3-4f1f-8063-5f3bade317b6-kube-api-access-7bzd9\") pod \"keystone-operator-controller-manager-b8b6d4659-kph4d\" (UID: \"95e36175-15e3-4f1f-8063-5f3bade317b6\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kph4d" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.459712 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-h6pnx"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.482903 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.483659 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.496061 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gbl72" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.497323 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-4wh9x" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.497527 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.514076 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-tjf5f" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.519906 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-bjt9d"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.520952 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bjt9d" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.537563 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-jqlbt" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.546839 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.559541 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgk76\" (UniqueName: \"kubernetes.io/projected/a3cdb036-7094-48e3-9d3d-8699ece77b88-kube-api-access-dgk76\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-8bvfh\" (UID: \"a3cdb036-7094-48e3-9d3d-8699ece77b88\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8bvfh" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.559600 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-572hb\" (UniqueName: \"kubernetes.io/projected/46e73157-89a7-4ca4-b71f-2f2e05181ea1-kube-api-access-572hb\") pod \"neutron-operator-controller-manager-78d58447c5-dj764\" (UID: \"46e73157-89a7-4ca4-b71f-2f2e05181ea1\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dj764" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.559621 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v8cg\" (UniqueName: \"kubernetes.io/projected/facb1993-c676-4104-9090-8f8b4d8576ed-kube-api-access-7v8cg\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2\" (UID: \"facb1993-c676-4104-9090-8f8b4d8576ed\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.559643 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk7mk\" (UniqueName: \"kubernetes.io/projected/3f283669-e4aa-48ca-b487-c1f34759f97a-kube-api-access-lk7mk\") pod 
\"octavia-operator-controller-manager-5f4cd88d46-h6pnx\" (UID: \"3f283669-e4aa-48ca-b487-c1f34759f97a\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-h6pnx" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.559670 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cqzn\" (UniqueName: \"kubernetes.io/projected/24283d6a-6945-4ce8-991e-25102b2a0bea-kube-api-access-8cqzn\") pod \"ovn-operator-controller-manager-6f75f45d54-bjt9d\" (UID: \"24283d6a-6945-4ce8-991e-25102b2a0bea\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bjt9d" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.559692 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqrmc\" (UniqueName: \"kubernetes.io/projected/5a1cd9b4-00f9-430f-8857-718672e03003-kube-api-access-lqrmc\") pod \"ironic-operator-controller-manager-598f7747c9-pvgqp\" (UID: \"5a1cd9b4-00f9-430f-8857-718672e03003\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pvgqp" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.559708 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bzd9\" (UniqueName: \"kubernetes.io/projected/95e36175-15e3-4f1f-8063-5f3bade317b6-kube-api-access-7bzd9\") pod \"keystone-operator-controller-manager-b8b6d4659-kph4d\" (UID: \"95e36175-15e3-4f1f-8063-5f3bade317b6\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kph4d" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.559751 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85kpx\" (UniqueName: \"kubernetes.io/projected/33da9d5c-c09e-492d-b23d-6cc5ceaef8b9-kube-api-access-85kpx\") pod \"nova-operator-controller-manager-7bdb645866-xnt8t\" (UID: \"33da9d5c-c09e-492d-b23d-6cc5ceaef8b9\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xnt8t" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.559768 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2m77\" (UniqueName: \"kubernetes.io/projected/fa54c4d9-8d7b-4284-bb64-d21d21e9a83e-kube-api-access-q2m77\") pod \"manila-operator-controller-manager-78c6999f6f-g799f\" (UID: \"fa54c4d9-8d7b-4284-bb64-d21d21e9a83e\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g799f" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.559786 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/facb1993-c676-4104-9090-8f8b4d8576ed-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2\" (UID: \"facb1993-c676-4104-9090-8f8b4d8576ed\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.559903 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-rscb2"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.560602 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rscb2" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.567131 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-dm62f" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.568873 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rhfb" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.638807 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-bjt9d"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.639161 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-rscb2"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.661453 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2m77\" (UniqueName: \"kubernetes.io/projected/fa54c4d9-8d7b-4284-bb64-d21d21e9a83e-kube-api-access-q2m77\") pod \"manila-operator-controller-manager-78c6999f6f-g799f\" (UID: \"fa54c4d9-8d7b-4284-bb64-d21d21e9a83e\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g799f" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.666638 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk7mk\" (UniqueName: \"kubernetes.io/projected/3f283669-e4aa-48ca-b487-c1f34759f97a-kube-api-access-lk7mk\") pod \"octavia-operator-controller-manager-5f4cd88d46-h6pnx\" (UID: \"3f283669-e4aa-48ca-b487-c1f34759f97a\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-h6pnx" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.666768 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cqzn\" (UniqueName: \"kubernetes.io/projected/24283d6a-6945-4ce8-991e-25102b2a0bea-kube-api-access-8cqzn\") pod \"ovn-operator-controller-manager-6f75f45d54-bjt9d\" (UID: \"24283d6a-6945-4ce8-991e-25102b2a0bea\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bjt9d" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.666920 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85kpx\" (UniqueName: \"kubernetes.io/projected/33da9d5c-c09e-492d-b23d-6cc5ceaef8b9-kube-api-access-85kpx\") pod \"nova-operator-controller-manager-7bdb645866-xnt8t\" (UID: \"33da9d5c-c09e-492d-b23d-6cc5ceaef8b9\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xnt8t" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.670976 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/facb1993-c676-4104-9090-8f8b4d8576ed-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2\" (UID: \"facb1993-c676-4104-9090-8f8b4d8576ed\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.671081 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgk76\" (UniqueName: \"kubernetes.io/projected/a3cdb036-7094-48e3-9d3d-8699ece77b88-kube-api-access-dgk76\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-8bvfh\" (UID: 
\"a3cdb036-7094-48e3-9d3d-8699ece77b88\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8bvfh" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.671193 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-572hb\" (UniqueName: \"kubernetes.io/projected/46e73157-89a7-4ca4-b71f-2f2e05181ea1-kube-api-access-572hb\") pod \"neutron-operator-controller-manager-78d58447c5-dj764\" (UID: \"46e73157-89a7-4ca4-b71f-2f2e05181ea1\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dj764" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.671278 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v8cg\" (UniqueName: \"kubernetes.io/projected/facb1993-c676-4104-9090-8f8b4d8576ed-kube-api-access-7v8cg\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2\" (UID: \"facb1993-c676-4104-9090-8f8b4d8576ed\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" Jan 27 10:07:37 crc kubenswrapper[4869]: E0127 10:07:37.671190 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 10:07:37 crc kubenswrapper[4869]: E0127 10:07:37.671758 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/facb1993-c676-4104-9090-8f8b4d8576ed-cert podName:facb1993-c676-4104-9090-8f8b4d8576ed nodeName:}" failed. No retries permitted until 2026-01-27 10:07:38.171742341 +0000 UTC m=+826.792166424 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/facb1993-c676-4104-9090-8f8b4d8576ed-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" (UID: "facb1993-c676-4104-9090-8f8b4d8576ed") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.690083 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqrmc\" (UniqueName: \"kubernetes.io/projected/5a1cd9b4-00f9-430f-8857-718672e03003-kube-api-access-lqrmc\") pod \"ironic-operator-controller-manager-598f7747c9-pvgqp\" (UID: \"5a1cd9b4-00f9-430f-8857-718672e03003\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pvgqp" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.707274 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgk76\" (UniqueName: \"kubernetes.io/projected/a3cdb036-7094-48e3-9d3d-8699ece77b88-kube-api-access-dgk76\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-8bvfh\" (UID: \"a3cdb036-7094-48e3-9d3d-8699ece77b88\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8bvfh" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.708089 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bzd9\" (UniqueName: \"kubernetes.io/projected/95e36175-15e3-4f1f-8063-5f3bade317b6-kube-api-access-7bzd9\") pod \"keystone-operator-controller-manager-b8b6d4659-kph4d\" (UID: \"95e36175-15e3-4f1f-8063-5f3bade317b6\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kph4d" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.723152 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-572hb\" (UniqueName: 
\"kubernetes.io/projected/46e73157-89a7-4ca4-b71f-2f2e05181ea1-kube-api-access-572hb\") pod \"neutron-operator-controller-manager-78d58447c5-dj764\" (UID: \"46e73157-89a7-4ca4-b71f-2f2e05181ea1\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dj764" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.740509 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cqzn\" (UniqueName: \"kubernetes.io/projected/24283d6a-6945-4ce8-991e-25102b2a0bea-kube-api-access-8cqzn\") pod \"ovn-operator-controller-manager-6f75f45d54-bjt9d\" (UID: \"24283d6a-6945-4ce8-991e-25102b2a0bea\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bjt9d" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.741045 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk7mk\" (UniqueName: \"kubernetes.io/projected/3f283669-e4aa-48ca-b487-c1f34759f97a-kube-api-access-lk7mk\") pod \"octavia-operator-controller-manager-5f4cd88d46-h6pnx\" (UID: \"3f283669-e4aa-48ca-b487-c1f34759f97a\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-h6pnx" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.741501 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85kpx\" (UniqueName: \"kubernetes.io/projected/33da9d5c-c09e-492d-b23d-6cc5ceaef8b9-kube-api-access-85kpx\") pod \"nova-operator-controller-manager-7bdb645866-xnt8t\" (UID: \"33da9d5c-c09e-492d-b23d-6cc5ceaef8b9\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xnt8t" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.760403 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v8cg\" (UniqueName: \"kubernetes.io/projected/facb1993-c676-4104-9090-8f8b4d8576ed-kube-api-access-7v8cg\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2\" (UID: \"facb1993-c676-4104-9090-8f8b4d8576ed\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.787511 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g799f" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.796595 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8bvfh" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.797299 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm8k8\" (UniqueName: \"kubernetes.io/projected/50c0b859-fc98-4727-a1e2-cd0397e17bb7-kube-api-access-lm8k8\") pod \"placement-operator-controller-manager-79d5ccc684-rscb2\" (UID: \"50c0b859-fc98-4727-a1e2-cd0397e17bb7\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rscb2" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.814946 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-g2cx7"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.815886 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-g2cx7" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.826199 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dj764" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.826583 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-g2cx7"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.836299 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-rqg27" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.842881 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-6xdnp"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.843743 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-6xdnp" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.853934 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-6xdnp"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.854017 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-f65jk" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.859060 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xnt8t" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.899341 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm8k8\" (UniqueName: \"kubernetes.io/projected/50c0b859-fc98-4727-a1e2-cd0397e17bb7-kube-api-access-lm8k8\") pod \"placement-operator-controller-manager-79d5ccc684-rscb2\" (UID: \"50c0b859-fc98-4727-a1e2-cd0397e17bb7\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rscb2" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.899571 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5v66\" (UniqueName: \"kubernetes.io/projected/e1367e9a-318d-4800-926b-e0fe5cadf9b7-kube-api-access-l5v66\") pod \"swift-operator-controller-manager-547cbdb99f-g2cx7\" (UID: \"e1367e9a-318d-4800-926b-e0fe5cadf9b7\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-g2cx7" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.899706 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dffa1b35-d981-4c5c-8df0-341e6a5941a6-cert\") pod \"infra-operator-controller-manager-7f6fb95f66-4xhrc\" (UID: \"dffa1b35-d981-4c5c-8df0-341e6a5941a6\") " pod="openstack-operators/infra-operator-controller-manager-7f6fb95f66-4xhrc" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.899774 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptvr8\" (UniqueName: \"kubernetes.io/projected/5b08b641-c912-4e41-911c-6d46e9d589c9-kube-api-access-ptvr8\") pod \"telemetry-operator-controller-manager-85cd9769bb-6xdnp\" (UID: \"5b08b641-c912-4e41-911c-6d46e9d589c9\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-6xdnp" Jan 27 10:07:37 crc kubenswrapper[4869]: E0127 10:07:37.900179 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Jan 27 10:07:37 crc kubenswrapper[4869]: E0127 10:07:37.900278 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dffa1b35-d981-4c5c-8df0-341e6a5941a6-cert podName:dffa1b35-d981-4c5c-8df0-341e6a5941a6 nodeName:}" failed. No retries permitted until 2026-01-27 10:07:38.900263794 +0000 UTC m=+827.520687877 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dffa1b35-d981-4c5c-8df0-341e6a5941a6-cert") pod "infra-operator-controller-manager-7f6fb95f66-4xhrc" (UID: "dffa1b35-d981-4c5c-8df0-341e6a5941a6") : secret "infra-operator-webhook-server-cert" not found Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.916501 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-hj2l8"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.917859 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2l8" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.923904 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-njmjc" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.925374 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-hj2l8"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.939669 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm8k8\" (UniqueName: \"kubernetes.io/projected/50c0b859-fc98-4727-a1e2-cd0397e17bb7-kube-api-access-lm8k8\") pod \"placement-operator-controller-manager-79d5ccc684-rscb2\" (UID: \"50c0b859-fc98-4727-a1e2-cd0397e17bb7\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rscb2" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.942231 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-h6pnx" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.944395 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pvgqp" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.948684 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-dgc6k"] Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.949619 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-dgc6k" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.954138 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-frwxn" Jan 27 10:07:37 crc kubenswrapper[4869]: I0127 10:07:37.966416 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kph4d" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.003392 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5v66\" (UniqueName: \"kubernetes.io/projected/e1367e9a-318d-4800-926b-e0fe5cadf9b7-kube-api-access-l5v66\") pod \"swift-operator-controller-manager-547cbdb99f-g2cx7\" (UID: \"e1367e9a-318d-4800-926b-e0fe5cadf9b7\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-g2cx7" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.003479 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptvr8\" (UniqueName: \"kubernetes.io/projected/5b08b641-c912-4e41-911c-6d46e9d589c9-kube-api-access-ptvr8\") pod \"telemetry-operator-controller-manager-85cd9769bb-6xdnp\" (UID: \"5b08b641-c912-4e41-911c-6d46e9d589c9\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-6xdnp" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.015061 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bjt9d" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.028021 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-dgc6k"] Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.043724 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptvr8\" (UniqueName: \"kubernetes.io/projected/5b08b641-c912-4e41-911c-6d46e9d589c9-kube-api-access-ptvr8\") pod \"telemetry-operator-controller-manager-85cd9769bb-6xdnp\" (UID: \"5b08b641-c912-4e41-911c-6d46e9d589c9\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-6xdnp" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.061698 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rscb2" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.071597 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5v66\" (UniqueName: \"kubernetes.io/projected/e1367e9a-318d-4800-926b-e0fe5cadf9b7-kube-api-access-l5v66\") pod \"swift-operator-controller-manager-547cbdb99f-g2cx7\" (UID: \"e1367e9a-318d-4800-926b-e0fe5cadf9b7\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-g2cx7" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.104366 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m497\" (UniqueName: \"kubernetes.io/projected/61deff8e-df98-4cc4-86de-f60d12c8cfb9-kube-api-access-8m497\") pod \"watcher-operator-controller-manager-564965969-dgc6k\" (UID: \"61deff8e-df98-4cc4-86de-f60d12c8cfb9\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-dgc6k" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.104465 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5zlb\" (UniqueName: \"kubernetes.io/projected/c2bdaaef-bf80-4141-9d4f-d0942aa15e4e-kube-api-access-c5zlb\") pod \"test-operator-controller-manager-69797bbcbd-hj2l8\" (UID: \"c2bdaaef-bf80-4141-9d4f-d0942aa15e4e\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2l8" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.126483 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz"] Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.127330 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.131727 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.131906 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.134546 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-b6s8l" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.163034 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz"] Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.182384 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-g2cx7" Jan 27 10:07:38 crc kubenswrapper[4869]: W0127 10:07:38.193568 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfdd145e_d7b8_4078_aaa6_9b9827749b9a.slice/crio-9c5d42ddfa6a24ad15c250aa0f269a379bd0b46076de6c6788245513419e7294 WatchSource:0}: Error finding container 9c5d42ddfa6a24ad15c250aa0f269a379bd0b46076de6c6788245513419e7294: Status 404 returned error can't find the container with id 9c5d42ddfa6a24ad15c250aa0f269a379bd0b46076de6c6788245513419e7294 Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.193609 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2vjxv"] Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.194883 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2vjxv" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.197779 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-t6d8x" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.199000 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-6xdnp" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.201241 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2vjxv"] Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.205527 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m497\" (UniqueName: \"kubernetes.io/projected/61deff8e-df98-4cc4-86de-f60d12c8cfb9-kube-api-access-8m497\") pod \"watcher-operator-controller-manager-564965969-dgc6k\" (UID: \"61deff8e-df98-4cc4-86de-f60d12c8cfb9\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-dgc6k" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.205620 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/facb1993-c676-4104-9090-8f8b4d8576ed-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2\" (UID: \"facb1993-c676-4104-9090-8f8b4d8576ed\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.205652 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5zlb\" (UniqueName: \"kubernetes.io/projected/c2bdaaef-bf80-4141-9d4f-d0942aa15e4e-kube-api-access-c5zlb\") pod \"test-operator-controller-manager-69797bbcbd-hj2l8\" (UID: \"c2bdaaef-bf80-4141-9d4f-d0942aa15e4e\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2l8" Jan 27 10:07:38 crc kubenswrapper[4869]: E0127 10:07:38.206126 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 10:07:38 crc kubenswrapper[4869]: E0127 10:07:38.206181 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/facb1993-c676-4104-9090-8f8b4d8576ed-cert podName:facb1993-c676-4104-9090-8f8b4d8576ed nodeName:}" failed. 
No retries permitted until 2026-01-27 10:07:39.206162688 +0000 UTC m=+827.826586771 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/facb1993-c676-4104-9090-8f8b4d8576ed-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" (UID: "facb1993-c676-4104-9090-8f8b4d8576ed") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.217532 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-7p2lg"] Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.224711 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5zlb\" (UniqueName: \"kubernetes.io/projected/c2bdaaef-bf80-4141-9d4f-d0942aa15e4e-kube-api-access-c5zlb\") pod \"test-operator-controller-manager-69797bbcbd-hj2l8\" (UID: \"c2bdaaef-bf80-4141-9d4f-d0942aa15e4e\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2l8" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.225410 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m497\" (UniqueName: \"kubernetes.io/projected/61deff8e-df98-4cc4-86de-f60d12c8cfb9-kube-api-access-8m497\") pod \"watcher-operator-controller-manager-564965969-dgc6k\" (UID: \"61deff8e-df98-4cc4-86de-f60d12c8cfb9\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-dgc6k" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.231947 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7tzv4"] Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.245634 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2l8" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.314316 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-webhook-certs\") pod \"openstack-operator-controller-manager-7db7c99649-zbtgz\" (UID: \"6c54aaba-e55e-4168-9078-0de1b3f7e7fe\") " pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.314637 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjkv8\" (UniqueName: \"kubernetes.io/projected/bbc5d8d8-48d4-4b4f-96ee-87e21cf68ed2-kube-api-access-qjkv8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2vjxv\" (UID: \"bbc5d8d8-48d4-4b4f-96ee-87e21cf68ed2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2vjxv" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.314674 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-metrics-certs\") pod \"openstack-operator-controller-manager-7db7c99649-zbtgz\" (UID: \"6c54aaba-e55e-4168-9078-0de1b3f7e7fe\") " pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.314745 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzb5w\" (UniqueName: \"kubernetes.io/projected/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-kube-api-access-vzb5w\") pod \"openstack-operator-controller-manager-7db7c99649-zbtgz\" (UID: \"6c54aaba-e55e-4168-9078-0de1b3f7e7fe\") " pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.319055 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-dgc6k" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.416213 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzb5w\" (UniqueName: \"kubernetes.io/projected/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-kube-api-access-vzb5w\") pod \"openstack-operator-controller-manager-7db7c99649-zbtgz\" (UID: \"6c54aaba-e55e-4168-9078-0de1b3f7e7fe\") " pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.416271 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-webhook-certs\") pod \"openstack-operator-controller-manager-7db7c99649-zbtgz\" (UID: \"6c54aaba-e55e-4168-9078-0de1b3f7e7fe\") " pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.416298 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjkv8\" (UniqueName: \"kubernetes.io/projected/bbc5d8d8-48d4-4b4f-96ee-87e21cf68ed2-kube-api-access-qjkv8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2vjxv\" (UID: \"bbc5d8d8-48d4-4b4f-96ee-87e21cf68ed2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2vjxv" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.416334 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-metrics-certs\") pod \"openstack-operator-controller-manager-7db7c99649-zbtgz\" (UID: \"6c54aaba-e55e-4168-9078-0de1b3f7e7fe\") " pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:38 crc kubenswrapper[4869]: E0127 10:07:38.416464 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 10:07:38 crc kubenswrapper[4869]: E0127 10:07:38.416468 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 10:07:38 crc kubenswrapper[4869]: E0127 10:07:38.416514 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-webhook-certs podName:6c54aaba-e55e-4168-9078-0de1b3f7e7fe nodeName:}" failed. No retries permitted until 2026-01-27 10:07:38.916499144 +0000 UTC m=+827.536923227 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-webhook-certs") pod "openstack-operator-controller-manager-7db7c99649-zbtgz" (UID: "6c54aaba-e55e-4168-9078-0de1b3f7e7fe") : secret "webhook-server-cert" not found Jan 27 10:07:38 crc kubenswrapper[4869]: E0127 10:07:38.416527 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-metrics-certs podName:6c54aaba-e55e-4168-9078-0de1b3f7e7fe nodeName:}" failed. No retries permitted until 2026-01-27 10:07:38.916522175 +0000 UTC m=+827.536946258 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-metrics-certs") pod "openstack-operator-controller-manager-7db7c99649-zbtgz" (UID: "6c54aaba-e55e-4168-9078-0de1b3f7e7fe") : secret "metrics-server-cert" not found Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.446579 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzb5w\" (UniqueName: \"kubernetes.io/projected/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-kube-api-access-vzb5w\") pod \"openstack-operator-controller-manager-7db7c99649-zbtgz\" (UID: \"6c54aaba-e55e-4168-9078-0de1b3f7e7fe\") " pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.458089 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjkv8\" (UniqueName: \"kubernetes.io/projected/bbc5d8d8-48d4-4b4f-96ee-87e21cf68ed2-kube-api-access-qjkv8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2vjxv\" (UID: \"bbc5d8d8-48d4-4b4f-96ee-87e21cf68ed2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2vjxv" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.552164 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2vjxv" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.576606 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7tzv4" event={"ID":"cfdd145e-d7b8-4078-aaa6-9b9827749b9a","Type":"ContainerStarted","Data":"9c5d42ddfa6a24ad15c250aa0f269a379bd0b46076de6c6788245513419e7294"} Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.579773 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7p2lg" event={"ID":"b8e9717b-6786-4882-99ae-bbcaa887e310","Type":"ContainerStarted","Data":"763c4b94273d949558348fd6e25a81cc1b6ce7b719e34e2e190b1331351ae896"} Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.794555 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-dj764"] Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.801505 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-tjf5f"] Jan 27 10:07:38 crc kubenswrapper[4869]: W0127 10:07:38.811993 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96f59ef6_bb4a_453d_9de2_ba5e0933df0a.slice/crio-989683ce148856bf4cc1669518b06e8a4d14716fb083ea5cc88ce36d8f244599 WatchSource:0}: Error finding container 989683ce148856bf4cc1669518b06e8a4d14716fb083ea5cc88ce36d8f244599: Status 404 returned error can't find the container with id 989683ce148856bf4cc1669518b06e8a4d14716fb083ea5cc88ce36d8f244599 Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.814354 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-g5vdg"] Jan 27 10:07:38 crc kubenswrapper[4869]: W0127 10:07:38.817607 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60bb147d_e703_4ac4_8068_aa416605b7b5.slice/crio-fd2ee3bbf1b239a2df611505cca5fb931998e3687a6d82ff888e09b53986672f 
WatchSource:0}: Error finding container fd2ee3bbf1b239a2df611505cca5fb931998e3687a6d82ff888e09b53986672f: Status 404 returned error can't find the container with id fd2ee3bbf1b239a2df611505cca5fb931998e3687a6d82ff888e09b53986672f Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.828917 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rhfb"] Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.841572 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-gbl72"] Jan 27 10:07:38 crc kubenswrapper[4869]: W0127 10:07:38.892938 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa54c4d9_8d7b_4284_bb64_d21d21e9a83e.slice/crio-22709f52a2c26d517a9888eca1c13ccc8180ec5acba60407640e4d4ccf9e5e7f WatchSource:0}: Error finding container 22709f52a2c26d517a9888eca1c13ccc8180ec5acba60407640e4d4ccf9e5e7f: Status 404 returned error can't find the container with id 22709f52a2c26d517a9888eca1c13ccc8180ec5acba60407640e4d4ccf9e5e7f Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.891585 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-g799f"] Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.908074 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8bvfh"] Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.926817 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-webhook-certs\") pod \"openstack-operator-controller-manager-7db7c99649-zbtgz\" (UID: \"6c54aaba-e55e-4168-9078-0de1b3f7e7fe\") " pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.926929 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-metrics-certs\") pod \"openstack-operator-controller-manager-7db7c99649-zbtgz\" (UID: \"6c54aaba-e55e-4168-9078-0de1b3f7e7fe\") " pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.927010 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dffa1b35-d981-4c5c-8df0-341e6a5941a6-cert\") pod \"infra-operator-controller-manager-7f6fb95f66-4xhrc\" (UID: \"dffa1b35-d981-4c5c-8df0-341e6a5941a6\") " pod="openstack-operators/infra-operator-controller-manager-7f6fb95f66-4xhrc" Jan 27 10:07:38 crc kubenswrapper[4869]: E0127 10:07:38.927184 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 10:07:38 crc kubenswrapper[4869]: E0127 10:07:38.927259 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dffa1b35-d981-4c5c-8df0-341e6a5941a6-cert podName:dffa1b35-d981-4c5c-8df0-341e6a5941a6 nodeName:}" failed. No retries permitted until 2026-01-27 10:07:40.927220862 +0000 UTC m=+829.547644945 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dffa1b35-d981-4c5c-8df0-341e6a5941a6-cert") pod "infra-operator-controller-manager-7f6fb95f66-4xhrc" (UID: "dffa1b35-d981-4c5c-8df0-341e6a5941a6") : secret "infra-operator-webhook-server-cert" not found Jan 27 10:07:38 crc kubenswrapper[4869]: E0127 10:07:38.927704 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 10:07:38 crc kubenswrapper[4869]: E0127 10:07:38.927736 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-webhook-certs podName:6c54aaba-e55e-4168-9078-0de1b3f7e7fe nodeName:}" failed. No retries permitted until 2026-01-27 10:07:39.927727059 +0000 UTC m=+828.548151142 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-webhook-certs") pod "openstack-operator-controller-manager-7db7c99649-zbtgz" (UID: "6c54aaba-e55e-4168-9078-0de1b3f7e7fe") : secret "webhook-server-cert" not found Jan 27 10:07:38 crc kubenswrapper[4869]: E0127 10:07:38.927802 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 10:07:38 crc kubenswrapper[4869]: E0127 10:07:38.927842 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-metrics-certs podName:6c54aaba-e55e-4168-9078-0de1b3f7e7fe nodeName:}" failed. No retries permitted until 2026-01-27 10:07:39.927819512 +0000 UTC m=+828.548243595 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-metrics-certs") pod "openstack-operator-controller-manager-7db7c99649-zbtgz" (UID: "6c54aaba-e55e-4168-9078-0de1b3f7e7fe") : secret "metrics-server-cert" not found Jan 27 10:07:38 crc kubenswrapper[4869]: I0127 10:07:38.984287 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-xnt8t"] Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.013953 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-h6pnx"] Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.027433 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-rscb2"] Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.035447 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ptvr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-6xdnp_openstack-operators(5b08b641-c912-4e41-911c-6d46e9d589c9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.035552 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8cqzn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-bjt9d_openstack-operators(24283d6a-6945-4ce8-991e-25102b2a0bea): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.036705 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bjt9d" podUID="24283d6a-6945-4ce8-991e-25102b2a0bea" Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.036728 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-6xdnp" podUID="5b08b641-c912-4e41-911c-6d46e9d589c9" Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.038362 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-6xdnp"] Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.043575 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-g2cx7"] Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.048935 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-bjt9d"] Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.172081 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-pvgqp"] Jan 27 10:07:39 crc kubenswrapper[4869]: W0127 10:07:39.177960 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a1cd9b4_00f9_430f_8857_718672e03003.slice/crio-e3d727f4624b9406221ffef57f5942cafc09682b5aadd4631ddc61f1fb8cee53 WatchSource:0}: Error finding container e3d727f4624b9406221ffef57f5942cafc09682b5aadd4631ddc61f1fb8cee53: Status 404 returned error can't find the container with id e3d727f4624b9406221ffef57f5942cafc09682b5aadd4631ddc61f1fb8cee53 Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.182625 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lqrmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-598f7747c9-pvgqp_openstack-operators(5a1cd9b4-00f9-430f-8857-718672e03003): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.184082 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pvgqp" podUID="5a1cd9b4-00f9-430f-8857-718672e03003" Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.190650 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-kph4d"] Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.194493 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7bzd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-kph4d_openstack-operators(95e36175-15e3-4f1f-8063-5f3bade317b6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.195819 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kph4d" podUID="95e36175-15e3-4f1f-8063-5f3bade317b6" Jan 27 10:07:39 crc kubenswrapper[4869]: W0127 10:07:39.202917 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2bdaaef_bf80_4141_9d4f_d0942aa15e4e.slice/crio-684d6a4aa89c5462b72a3de7f6107a7d94c03c1df0a72745e441b12eb6c32c0e WatchSource:0}: Error finding container 684d6a4aa89c5462b72a3de7f6107a7d94c03c1df0a72745e441b12eb6c32c0e: Status 404 returned error can't find the container with id 684d6a4aa89c5462b72a3de7f6107a7d94c03c1df0a72745e441b12eb6c32c0e Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.203601 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-hj2l8"] Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.210932 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-dgc6k"] Jan 27 10:07:39 crc kubenswrapper[4869]: W0127 10:07:39.216737 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61deff8e_df98_4cc4_86de_f60d12c8cfb9.slice/crio-e7d94f37d0f187a9a1b0acb52763bcc05b445e9bd523ac3251b0d772c1bdd71b WatchSource:0}: Error finding container e7d94f37d0f187a9a1b0acb52763bcc05b445e9bd523ac3251b0d772c1bdd71b: Status 404 returned error can't find the container with id e7d94f37d0f187a9a1b0acb52763bcc05b445e9bd523ac3251b0d772c1bdd71b Jan 27 10:07:39 crc 
kubenswrapper[4869]: E0127 10:07:39.220552 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8m497,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-dgc6k_openstack-operators(61deff8e-df98-4cc4-86de-f60d12c8cfb9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.221871 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-dgc6k" podUID="61deff8e-df98-4cc4-86de-f60d12c8cfb9" Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.225764 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2vjxv"] Jan 27 10:07:39 crc kubenswrapper[4869]: W0127 10:07:39.229355 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbc5d8d8_48d4_4b4f_96ee_87e21cf68ed2.slice/crio-e961632d1071acec3d8ed7962688c15339d855c9a2c703d80d0b692f456f222a WatchSource:0}: Error finding container e961632d1071acec3d8ed7962688c15339d855c9a2c703d80d0b692f456f222a: Status 404 returned error can't find the container with id 
e961632d1071acec3d8ed7962688c15339d855c9a2c703d80d0b692f456f222a Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.230780 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/facb1993-c676-4104-9090-8f8b4d8576ed-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2\" (UID: \"facb1993-c676-4104-9090-8f8b4d8576ed\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.230984 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.231028 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/facb1993-c676-4104-9090-8f8b4d8576ed-cert podName:facb1993-c676-4104-9090-8f8b4d8576ed nodeName:}" failed. No retries permitted until 2026-01-27 10:07:41.231013607 +0000 UTC m=+829.851437690 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/facb1993-c676-4104-9090-8f8b4d8576ed-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" (UID: "facb1993-c676-4104-9090-8f8b4d8576ed") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.232358 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qjkv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-2vjxv_openstack-operators(bbc5d8d8-48d4-4b4f-96ee-87e21cf68ed2): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.234180 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2vjxv" podUID="bbc5d8d8-48d4-4b4f-96ee-87e21cf68ed2" Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.590381 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-h6pnx" event={"ID":"3f283669-e4aa-48ca-b487-c1f34759f97a","Type":"ContainerStarted","Data":"a8d73e09fd882e8f746c003364ebe1222cc29fc182450b34cd96634133376173"} Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.592003 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kph4d" event={"ID":"95e36175-15e3-4f1f-8063-5f3bade317b6","Type":"ContainerStarted","Data":"4ea397a70e64fc9605abfe712a99afb08ee78dff67e4f0812f4bbe14fc9384ff"} Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.594944 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kph4d" podUID="95e36175-15e3-4f1f-8063-5f3bade317b6" Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.595986 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2vjxv" event={"ID":"bbc5d8d8-48d4-4b4f-96ee-87e21cf68ed2","Type":"ContainerStarted","Data":"e961632d1071acec3d8ed7962688c15339d855c9a2c703d80d0b692f456f222a"} Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.598062 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2vjxv" podUID="bbc5d8d8-48d4-4b4f-96ee-87e21cf68ed2" Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.598615 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-dgc6k" event={"ID":"61deff8e-df98-4cc4-86de-f60d12c8cfb9","Type":"ContainerStarted","Data":"e7d94f37d0f187a9a1b0acb52763bcc05b445e9bd523ac3251b0d772c1bdd71b"} Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.602198 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-dgc6k" podUID="61deff8e-df98-4cc4-86de-f60d12c8cfb9" Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.603621 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gbl72" 
event={"ID":"22116ec0-0e77-4752-b374-ad20f73dc3f4","Type":"ContainerStarted","Data":"46361620e4f0d1a2466954233e778bb2445d905e4ced103ac129d526daca4e2c"} Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.611682 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xnt8t" event={"ID":"33da9d5c-c09e-492d-b23d-6cc5ceaef8b9","Type":"ContainerStarted","Data":"b0586e2028c02aa0ea74c34d1cd4237748e6d6f5b561d4faffe44577b5d48c44"} Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.615198 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g799f" event={"ID":"fa54c4d9-8d7b-4284-bb64-d21d21e9a83e","Type":"ContainerStarted","Data":"22709f52a2c26d517a9888eca1c13ccc8180ec5acba60407640e4d4ccf9e5e7f"} Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.617394 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-tjf5f" event={"ID":"96f59ef6-bb4a-453d-9de2-ba5e0933df0a","Type":"ContainerStarted","Data":"989683ce148856bf4cc1669518b06e8a4d14716fb083ea5cc88ce36d8f244599"} Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.618586 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rscb2" event={"ID":"50c0b859-fc98-4727-a1e2-cd0397e17bb7","Type":"ContainerStarted","Data":"0d4332c0198d2dda30cc434f74a06d5e7845a61918fc05ebdd5b9128d3a67c58"} Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.626209 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8bvfh" event={"ID":"a3cdb036-7094-48e3-9d3d-8699ece77b88","Type":"ContainerStarted","Data":"75422f400e68a04a7114efd4d50d18bde4d750fa01066a5ed8990772a18ad3c2"} Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.627464 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-6xdnp" event={"ID":"5b08b641-c912-4e41-911c-6d46e9d589c9","Type":"ContainerStarted","Data":"f42f28e3698ce92af8ecb65dc7e109f998d4db027b5dc81678f37dac9c67e75f"} Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.629581 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-6xdnp" podUID="5b08b641-c912-4e41-911c-6d46e9d589c9" Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.639503 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-g2cx7" event={"ID":"e1367e9a-318d-4800-926b-e0fe5cadf9b7","Type":"ContainerStarted","Data":"8c5d9ef75781d7e2eb0ee0d3014b1a4ef068990696412a42add3a3e9ee90932f"} Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.641868 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2l8" event={"ID":"c2bdaaef-bf80-4141-9d4f-d0942aa15e4e","Type":"ContainerStarted","Data":"684d6a4aa89c5462b72a3de7f6107a7d94c03c1df0a72745e441b12eb6c32c0e"} Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.645812 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bjt9d" event={"ID":"24283d6a-6945-4ce8-991e-25102b2a0bea","Type":"ContainerStarted","Data":"10eb9eea692e5f33023cfae68683516e6f1867daef9f93acc9980489a2b93b97"} Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.654253 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bjt9d" podUID="24283d6a-6945-4ce8-991e-25102b2a0bea" Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.657585 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rhfb" event={"ID":"1e946d3d-37fb-4bb6-8c8f-b7dcba782889","Type":"ContainerStarted","Data":"3e47d04ac0a6026709ffeada075e947081c8f7b5e2072917bbb45467b2e9d1b9"} Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.662996 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dj764" event={"ID":"46e73157-89a7-4ca4-b71f-2f2e05181ea1","Type":"ContainerStarted","Data":"086e70f9dca0aa8f4b43a2410fd9e4e3eed5d9afaa2c1279056241999a98b1fe"} Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.663499 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g5vdg" event={"ID":"60bb147d-e703-4ac4-8068-aa416605b7b5","Type":"ContainerStarted","Data":"fd2ee3bbf1b239a2df611505cca5fb931998e3687a6d82ff888e09b53986672f"} Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.666026 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pvgqp" event={"ID":"5a1cd9b4-00f9-430f-8857-718672e03003","Type":"ContainerStarted","Data":"e3d727f4624b9406221ffef57f5942cafc09682b5aadd4631ddc61f1fb8cee53"} Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.672546 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pvgqp" podUID="5a1cd9b4-00f9-430f-8857-718672e03003" Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.943731 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-webhook-certs\") pod \"openstack-operator-controller-manager-7db7c99649-zbtgz\" (UID: \"6c54aaba-e55e-4168-9078-0de1b3f7e7fe\") " pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:39 crc kubenswrapper[4869]: I0127 10:07:39.943804 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-metrics-certs\") pod \"openstack-operator-controller-manager-7db7c99649-zbtgz\" (UID: \"6c54aaba-e55e-4168-9078-0de1b3f7e7fe\") " pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.944018 4869 secret.go:188] Couldn't get 
secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.944075 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-metrics-certs podName:6c54aaba-e55e-4168-9078-0de1b3f7e7fe nodeName:}" failed. No retries permitted until 2026-01-27 10:07:41.944060988 +0000 UTC m=+830.564485071 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-metrics-certs") pod "openstack-operator-controller-manager-7db7c99649-zbtgz" (UID: "6c54aaba-e55e-4168-9078-0de1b3f7e7fe") : secret "metrics-server-cert" not found Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.944103 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 10:07:39 crc kubenswrapper[4869]: E0127 10:07:39.944186 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-webhook-certs podName:6c54aaba-e55e-4168-9078-0de1b3f7e7fe nodeName:}" failed. No retries permitted until 2026-01-27 10:07:41.944168481 +0000 UTC m=+830.564592564 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-webhook-certs") pod "openstack-operator-controller-manager-7db7c99649-zbtgz" (UID: "6c54aaba-e55e-4168-9078-0de1b3f7e7fe") : secret "webhook-server-cert" not found Jan 27 10:07:40 crc kubenswrapper[4869]: E0127 10:07:40.675931 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2vjxv" podUID="bbc5d8d8-48d4-4b4f-96ee-87e21cf68ed2" Jan 27 10:07:40 crc kubenswrapper[4869]: E0127 10:07:40.677064 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bjt9d" podUID="24283d6a-6945-4ce8-991e-25102b2a0bea" Jan 27 10:07:40 crc kubenswrapper[4869]: E0127 10:07:40.677068 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pvgqp" podUID="5a1cd9b4-00f9-430f-8857-718672e03003" Jan 27 10:07:40 crc kubenswrapper[4869]: E0127 10:07:40.677256 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-dgc6k" podUID="61deff8e-df98-4cc4-86de-f60d12c8cfb9" Jan 27 10:07:40 crc kubenswrapper[4869]: E0127 
10:07:40.677559 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kph4d" podUID="95e36175-15e3-4f1f-8063-5f3bade317b6" Jan 27 10:07:40 crc kubenswrapper[4869]: E0127 10:07:40.679022 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-6xdnp" podUID="5b08b641-c912-4e41-911c-6d46e9d589c9" Jan 27 10:07:40 crc kubenswrapper[4869]: I0127 10:07:40.972313 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dffa1b35-d981-4c5c-8df0-341e6a5941a6-cert\") pod \"infra-operator-controller-manager-7f6fb95f66-4xhrc\" (UID: \"dffa1b35-d981-4c5c-8df0-341e6a5941a6\") " pod="openstack-operators/infra-operator-controller-manager-7f6fb95f66-4xhrc" Jan 27 10:07:40 crc kubenswrapper[4869]: E0127 10:07:40.972468 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 10:07:40 crc kubenswrapper[4869]: E0127 10:07:40.972537 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dffa1b35-d981-4c5c-8df0-341e6a5941a6-cert podName:dffa1b35-d981-4c5c-8df0-341e6a5941a6 nodeName:}" failed. No retries permitted until 2026-01-27 10:07:44.972520295 +0000 UTC m=+833.592944378 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dffa1b35-d981-4c5c-8df0-341e6a5941a6-cert") pod "infra-operator-controller-manager-7f6fb95f66-4xhrc" (UID: "dffa1b35-d981-4c5c-8df0-341e6a5941a6") : secret "infra-operator-webhook-server-cert" not found Jan 27 10:07:41 crc kubenswrapper[4869]: I0127 10:07:41.281770 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/facb1993-c676-4104-9090-8f8b4d8576ed-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2\" (UID: \"facb1993-c676-4104-9090-8f8b4d8576ed\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" Jan 27 10:07:41 crc kubenswrapper[4869]: E0127 10:07:41.282025 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 10:07:41 crc kubenswrapper[4869]: E0127 10:07:41.282072 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/facb1993-c676-4104-9090-8f8b4d8576ed-cert podName:facb1993-c676-4104-9090-8f8b4d8576ed nodeName:}" failed. No retries permitted until 2026-01-27 10:07:45.282058838 +0000 UTC m=+833.902482921 (durationBeforeRetry 4s). 
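
The kubenswrapper entries here follow the standard klog header layout: a severity letter (I=info, W=warning, E=error), the date as MMDD, a microsecond timestamp, the process id (4869, matching kubenswrapper[4869]), the source file and line that emitted the message, and then the message itself, often carrying structured key="value" fields. A minimal Go sketch of pulling those fields apart; the regexp is an illustration written against the lines in this log, not anything taken from the kubelet itself:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Matches the klog header visible in these entries, e.g.
    // E0127 10:07:39.944018 4869 secret.go:188] Couldn't get secret ...
    var klogHeader = regexp.MustCompile(`^([IWE])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([\w.]+:\d+)\] (.*)$`)

    func main() {
    	line := `E0127 10:07:39.944018 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found`
    	m := klogHeader.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("no match")
    		return
    	}
    	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\n", m[1], m[2], m[3], m[4], m[5])
    	fmt.Printf("message=%s\n", m[6])
    }

Run against the secret.go:188 error above, this yields severity=E, source=secret.go:188, and the message about the missing metrics-server-cert secret.
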
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/facb1993-c676-4104-9090-8f8b4d8576ed-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" (UID: "facb1993-c676-4104-9090-8f8b4d8576ed") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 10:07:41 crc kubenswrapper[4869]: I0127 10:07:41.991757 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-webhook-certs\") pod \"openstack-operator-controller-manager-7db7c99649-zbtgz\" (UID: \"6c54aaba-e55e-4168-9078-0de1b3f7e7fe\") " pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:41 crc kubenswrapper[4869]: I0127 10:07:41.992117 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-metrics-certs\") pod \"openstack-operator-controller-manager-7db7c99649-zbtgz\" (UID: \"6c54aaba-e55e-4168-9078-0de1b3f7e7fe\") " pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:41 crc kubenswrapper[4869]: E0127 10:07:41.992051 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 10:07:41 crc kubenswrapper[4869]: E0127 10:07:41.992309 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-webhook-certs podName:6c54aaba-e55e-4168-9078-0de1b3f7e7fe nodeName:}" failed. No retries permitted until 2026-01-27 10:07:45.992296227 +0000 UTC m=+834.612720310 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-webhook-certs") pod "openstack-operator-controller-manager-7db7c99649-zbtgz" (UID: "6c54aaba-e55e-4168-9078-0de1b3f7e7fe") : secret "webhook-server-cert" not found Jan 27 10:07:41 crc kubenswrapper[4869]: E0127 10:07:41.992261 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 10:07:41 crc kubenswrapper[4869]: E0127 10:07:41.992626 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-metrics-certs podName:6c54aaba-e55e-4168-9078-0de1b3f7e7fe nodeName:}" failed. No retries permitted until 2026-01-27 10:07:45.992618388 +0000 UTC m=+834.613042471 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-metrics-certs") pod "openstack-operator-controller-manager-7db7c99649-zbtgz" (UID: "6c54aaba-e55e-4168-9078-0de1b3f7e7fe") : secret "metrics-server-cert" not found Jan 27 10:07:45 crc kubenswrapper[4869]: I0127 10:07:45.035615 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dffa1b35-d981-4c5c-8df0-341e6a5941a6-cert\") pod \"infra-operator-controller-manager-7f6fb95f66-4xhrc\" (UID: \"dffa1b35-d981-4c5c-8df0-341e6a5941a6\") " pod="openstack-operators/infra-operator-controller-manager-7f6fb95f66-4xhrc" Jan 27 10:07:45 crc kubenswrapper[4869]: E0127 10:07:45.035808 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 10:07:45 crc kubenswrapper[4869]: E0127 10:07:45.036131 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dffa1b35-d981-4c5c-8df0-341e6a5941a6-cert podName:dffa1b35-d981-4c5c-8df0-341e6a5941a6 nodeName:}" failed. No retries permitted until 2026-01-27 10:07:53.036088474 +0000 UTC m=+841.656512557 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dffa1b35-d981-4c5c-8df0-341e6a5941a6-cert") pod "infra-operator-controller-manager-7f6fb95f66-4xhrc" (UID: "dffa1b35-d981-4c5c-8df0-341e6a5941a6") : secret "infra-operator-webhook-server-cert" not found Jan 27 10:07:45 crc kubenswrapper[4869]: I0127 10:07:45.340646 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/facb1993-c676-4104-9090-8f8b4d8576ed-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2\" (UID: \"facb1993-c676-4104-9090-8f8b4d8576ed\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" Jan 27 10:07:45 crc kubenswrapper[4869]: E0127 10:07:45.340792 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 10:07:45 crc kubenswrapper[4869]: E0127 10:07:45.340866 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/facb1993-c676-4104-9090-8f8b4d8576ed-cert podName:facb1993-c676-4104-9090-8f8b4d8576ed nodeName:}" failed. No retries permitted until 2026-01-27 10:07:53.34084837 +0000 UTC m=+841.961272453 (durationBeforeRetry 8s). 
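
Note the durationBeforeRetry values in the nestedpendingoperations entries: 2s on the first failure, then 4s, then 8s here, and 16s a few entries further down. The kubelet doubles the retry delay after each failed MountVolume attempt for the same volume until the missing secret appears. A minimal sketch of that doubling pattern, with the starting delay read off this log rather than from the kubelet source (the cap is an assumption for the sketch, not the kubelet's actual constant):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	delay := 2 * time.Second          // first retry delay seen in this log
    	const maxDelay = 2 * time.Minute  // assumed cap, illustration only
    	for attempt := 1; attempt <= 4; attempt++ {
    		fmt.Printf("attempt %d failed: secret not found; retrying in %s\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }

Once the secrets are created the pattern stops: the MountVolume.SetUp succeeded entries at 10:07:53 and 10:07:54 below show exactly that for the infra-operator cert and the openstack-operator metrics/webhook certs.
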
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/facb1993-c676-4104-9090-8f8b4d8576ed-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" (UID: "facb1993-c676-4104-9090-8f8b4d8576ed") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 10:07:46 crc kubenswrapper[4869]: I0127 10:07:46.049776 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-webhook-certs\") pod \"openstack-operator-controller-manager-7db7c99649-zbtgz\" (UID: \"6c54aaba-e55e-4168-9078-0de1b3f7e7fe\") " pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:46 crc kubenswrapper[4869]: I0127 10:07:46.049894 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-metrics-certs\") pod \"openstack-operator-controller-manager-7db7c99649-zbtgz\" (UID: \"6c54aaba-e55e-4168-9078-0de1b3f7e7fe\") " pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:46 crc kubenswrapper[4869]: E0127 10:07:46.049954 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 10:07:46 crc kubenswrapper[4869]: E0127 10:07:46.050014 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-webhook-certs podName:6c54aaba-e55e-4168-9078-0de1b3f7e7fe nodeName:}" failed. No retries permitted until 2026-01-27 10:07:54.049996224 +0000 UTC m=+842.670420307 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-webhook-certs") pod "openstack-operator-controller-manager-7db7c99649-zbtgz" (UID: "6c54aaba-e55e-4168-9078-0de1b3f7e7fe") : secret "webhook-server-cert" not found Jan 27 10:07:46 crc kubenswrapper[4869]: E0127 10:07:46.050064 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 10:07:46 crc kubenswrapper[4869]: E0127 10:07:46.050102 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-metrics-certs podName:6c54aaba-e55e-4168-9078-0de1b3f7e7fe nodeName:}" failed. No retries permitted until 2026-01-27 10:07:54.050091217 +0000 UTC m=+842.670515300 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-metrics-certs") pod "openstack-operator-controller-manager-7db7c99649-zbtgz" (UID: "6c54aaba-e55e-4168-9078-0de1b3f7e7fe") : secret "metrics-server-cert" not found Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.735469 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rscb2" event={"ID":"50c0b859-fc98-4727-a1e2-cd0397e17bb7","Type":"ContainerStarted","Data":"370e23e62e947f7689b87abc8e993583529b6426492e9df57e40555349f402d5"} Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.736677 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rscb2" Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.738154 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7p2lg" event={"ID":"b8e9717b-6786-4882-99ae-bbcaa887e310","Type":"ContainerStarted","Data":"bec4f521067fd8554da3f42b787b3d468538eb3a8f39816fb9bac0a330cf384d"} Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.738258 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7p2lg" Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.739651 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2l8" event={"ID":"c2bdaaef-bf80-4141-9d4f-d0942aa15e4e","Type":"ContainerStarted","Data":"71ecf3bda216edd0a0c4c6e074b429a423fc38f4eeb06624c86a954c3c2c8be0"} Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.740132 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2l8" Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.741333 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gbl72" event={"ID":"22116ec0-0e77-4752-b374-ad20f73dc3f4","Type":"ContainerStarted","Data":"9a51808b8b63dafd577f639a2f56c9647f7ce18370a22240d5eb8f960f25b439"} Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.741465 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gbl72" Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.743094 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xnt8t" event={"ID":"33da9d5c-c09e-492d-b23d-6cc5ceaef8b9","Type":"ContainerStarted","Data":"5008165f5cad9cd73534858714ac170ea803b118e6d9bb0f6986ebeb9f77206f"} Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.743190 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xnt8t" Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.745056 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g799f" event={"ID":"fa54c4d9-8d7b-4284-bb64-d21d21e9a83e","Type":"ContainerStarted","Data":"7f2ef4ec87a1d905611ef50a24d2127994958a689524a41f4d8577368f4eba9f"} Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.745109 4869 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g799f" Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.746960 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-h6pnx" event={"ID":"3f283669-e4aa-48ca-b487-c1f34759f97a","Type":"ContainerStarted","Data":"188ae0e364f73fc43e47da6b8a7357f258c88e7f7996f4d1f8497d01b9559eaa"} Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.747373 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-h6pnx" Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.748717 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-tjf5f" event={"ID":"96f59ef6-bb4a-453d-9de2-ba5e0933df0a","Type":"ContainerStarted","Data":"45893060a16ceec017677cd91e1b4e318d2547f979e9a6dca1ef9d86b0c3b960"} Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.749002 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-tjf5f" Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.750611 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g5vdg" event={"ID":"60bb147d-e703-4ac4-8068-aa416605b7b5","Type":"ContainerStarted","Data":"96499f6b6fd3553f0ad7d0d99648bb77728b8ba3095e0667739a3ce6d91ed668"} Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.750731 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g5vdg" Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.752075 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7tzv4" event={"ID":"cfdd145e-d7b8-4078-aaa6-9b9827749b9a","Type":"ContainerStarted","Data":"57bde1072fd9ec90d0ecaabcf5ebe5860ee16c0c676a123e76c770aa803396ad"} Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.752148 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7tzv4" Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.753759 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8bvfh" event={"ID":"a3cdb036-7094-48e3-9d3d-8699ece77b88","Type":"ContainerStarted","Data":"14a0543653336a7698b9bdb4f681a6e69a69013771dbbf3cd33a45ec6675f087"} Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.753878 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8bvfh" Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.755202 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rhfb" event={"ID":"1e946d3d-37fb-4bb6-8c8f-b7dcba782889","Type":"ContainerStarted","Data":"ec79fcaa18628d9ec3aff91262979fd8237676d0a6ca5ff9d982354abc3eea4e"} Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.755246 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rhfb" Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.757265 4869 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dj764" event={"ID":"46e73157-89a7-4ca4-b71f-2f2e05181ea1","Type":"ContainerStarted","Data":"65a4c669cdb31ce286f5e98dd0f984ddaf15086c0dacb954c3c66829a618021b"} Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.757335 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dj764" Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.759161 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-g2cx7" event={"ID":"e1367e9a-318d-4800-926b-e0fe5cadf9b7","Type":"ContainerStarted","Data":"32a9d9cf2b8b912e24944c1fdda116b7ccdf830b990dcc09302d3122c08d680b"} Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.759312 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-g2cx7" Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.771526 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rscb2" podStartSLOduration=2.8750493710000002 podStartE2EDuration="13.771506914s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="2026-01-27 10:07:39.017651041 +0000 UTC m=+827.638075124" lastFinishedPulling="2026-01-27 10:07:49.914108584 +0000 UTC m=+838.534532667" observedRunningTime="2026-01-27 10:07:50.768092233 +0000 UTC m=+839.388516316" watchObservedRunningTime="2026-01-27 10:07:50.771506914 +0000 UTC m=+839.391930997" Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.797581 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7tzv4" podStartSLOduration=2.092045353 podStartE2EDuration="13.79756696s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="2026-01-27 10:07:38.202023071 +0000 UTC m=+826.822447154" lastFinishedPulling="2026-01-27 10:07:49.907544678 +0000 UTC m=+838.527968761" observedRunningTime="2026-01-27 10:07:50.79421568 +0000 UTC m=+839.414639753" watchObservedRunningTime="2026-01-27 10:07:50.79756696 +0000 UTC m=+839.417991043" Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.878588 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-tjf5f" podStartSLOduration=2.7905436679999998 podStartE2EDuration="13.878564459s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="2026-01-27 10:07:38.818358368 +0000 UTC m=+827.438782451" lastFinishedPulling="2026-01-27 10:07:49.906379159 +0000 UTC m=+838.526803242" observedRunningTime="2026-01-27 10:07:50.843059154 +0000 UTC m=+839.463483237" watchObservedRunningTime="2026-01-27 10:07:50.878564459 +0000 UTC m=+839.498988542" Jan 27 10:07:50 crc kubenswrapper[4869]: I0127 10:07:50.956284 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2l8" podStartSLOduration=3.261844841 podStartE2EDuration="13.95626978s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="2026-01-27 10:07:39.205460088 +0000 UTC m=+827.825884171" lastFinishedPulling="2026-01-27 10:07:49.899885027 +0000 UTC m=+838.520309110" observedRunningTime="2026-01-27 10:07:50.95168776 +0000 UTC 
m=+839.572111843" watchObservedRunningTime="2026-01-27 10:07:50.95626978 +0000 UTC m=+839.576693863" Jan 27 10:07:51 crc kubenswrapper[4869]: I0127 10:07:51.096073 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7p2lg" podStartSLOduration=2.357699316 podStartE2EDuration="14.096057041s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="2026-01-27 10:07:38.1614604 +0000 UTC m=+826.781884483" lastFinishedPulling="2026-01-27 10:07:49.899818125 +0000 UTC m=+838.520242208" observedRunningTime="2026-01-27 10:07:51.011101151 +0000 UTC m=+839.631525234" watchObservedRunningTime="2026-01-27 10:07:51.096057041 +0000 UTC m=+839.716481114" Jan 27 10:07:51 crc kubenswrapper[4869]: I0127 10:07:51.163021 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xnt8t" podStartSLOduration=3.213470814 podStartE2EDuration="14.163006399s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="2026-01-27 10:07:39.017946871 +0000 UTC m=+827.638370954" lastFinishedPulling="2026-01-27 10:07:49.967482456 +0000 UTC m=+838.587906539" observedRunningTime="2026-01-27 10:07:51.093806176 +0000 UTC m=+839.714230259" watchObservedRunningTime="2026-01-27 10:07:51.163006399 +0000 UTC m=+839.783430482" Jan 27 10:07:51 crc kubenswrapper[4869]: I0127 10:07:51.166273 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rhfb" podStartSLOduration=3.103402229 podStartE2EDuration="14.166259575s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="2026-01-27 10:07:38.842439608 +0000 UTC m=+827.462863701" lastFinishedPulling="2026-01-27 10:07:49.905296964 +0000 UTC m=+838.525721047" observedRunningTime="2026-01-27 10:07:51.160953211 +0000 UTC m=+839.781377294" watchObservedRunningTime="2026-01-27 10:07:51.166259575 +0000 UTC m=+839.786683648" Jan 27 10:07:51 crc kubenswrapper[4869]: I0127 10:07:51.212050 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-g2cx7" podStartSLOduration=3.329902276 podStartE2EDuration="14.212020478s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="2026-01-27 10:07:39.023942178 +0000 UTC m=+827.644366261" lastFinishedPulling="2026-01-27 10:07:49.90606038 +0000 UTC m=+838.526484463" observedRunningTime="2026-01-27 10:07:51.20721962 +0000 UTC m=+839.827643703" watchObservedRunningTime="2026-01-27 10:07:51.212020478 +0000 UTC m=+839.832444561" Jan 27 10:07:51 crc kubenswrapper[4869]: I0127 10:07:51.245700 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g5vdg" podStartSLOduration=3.161570069 podStartE2EDuration="14.245685423s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="2026-01-27 10:07:38.821539812 +0000 UTC m=+827.441963895" lastFinishedPulling="2026-01-27 10:07:49.905655166 +0000 UTC m=+838.526079249" observedRunningTime="2026-01-27 10:07:51.244662549 +0000 UTC m=+839.865086632" watchObservedRunningTime="2026-01-27 10:07:51.245685423 +0000 UTC m=+839.866109506" Jan 27 10:07:51 crc kubenswrapper[4869]: I0127 10:07:51.301151 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dj764" podStartSLOduration=3.198980628 podStartE2EDuration="14.301133603s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="2026-01-27 10:07:38.804163132 +0000 UTC m=+827.424587215" lastFinishedPulling="2026-01-27 10:07:49.906316107 +0000 UTC m=+838.526740190" observedRunningTime="2026-01-27 10:07:51.29948263 +0000 UTC m=+839.919906713" watchObservedRunningTime="2026-01-27 10:07:51.301133603 +0000 UTC m=+839.921557686" Jan 27 10:07:51 crc kubenswrapper[4869]: I0127 10:07:51.303755 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-h6pnx" podStartSLOduration=3.382879875 podStartE2EDuration="14.303748109s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="2026-01-27 10:07:38.997131007 +0000 UTC m=+827.617555090" lastFinishedPulling="2026-01-27 10:07:49.917999221 +0000 UTC m=+838.538423324" observedRunningTime="2026-01-27 10:07:51.278886433 +0000 UTC m=+839.899310516" watchObservedRunningTime="2026-01-27 10:07:51.303748109 +0000 UTC m=+839.924172192" Jan 27 10:07:51 crc kubenswrapper[4869]: I0127 10:07:51.335410 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gbl72" podStartSLOduration=3.274707893 podStartE2EDuration="14.335396769s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="2026-01-27 10:07:38.845164967 +0000 UTC m=+827.465589060" lastFinishedPulling="2026-01-27 10:07:49.905853863 +0000 UTC m=+838.526277936" observedRunningTime="2026-01-27 10:07:51.331755149 +0000 UTC m=+839.952179232" watchObservedRunningTime="2026-01-27 10:07:51.335396769 +0000 UTC m=+839.955820852" Jan 27 10:07:51 crc kubenswrapper[4869]: I0127 10:07:51.370779 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8bvfh" podStartSLOduration=3.364075018 podStartE2EDuration="14.370759339s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="2026-01-27 10:07:38.899984618 +0000 UTC m=+827.520408701" lastFinishedPulling="2026-01-27 10:07:49.906668939 +0000 UTC m=+838.527093022" observedRunningTime="2026-01-27 10:07:51.367894655 +0000 UTC m=+839.988318738" watchObservedRunningTime="2026-01-27 10:07:51.370759339 +0000 UTC m=+839.991183422" Jan 27 10:07:51 crc kubenswrapper[4869]: I0127 10:07:51.411278 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g799f" podStartSLOduration=3.410189702 podStartE2EDuration="14.41125747s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="2026-01-27 10:07:38.898752087 +0000 UTC m=+827.519176170" lastFinishedPulling="2026-01-27 10:07:49.899819865 +0000 UTC m=+838.520243938" observedRunningTime="2026-01-27 10:07:51.409238844 +0000 UTC m=+840.029662927" watchObservedRunningTime="2026-01-27 10:07:51.41125747 +0000 UTC m=+840.031681553" Jan 27 10:07:53 crc kubenswrapper[4869]: I0127 10:07:53.090112 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dffa1b35-d981-4c5c-8df0-341e6a5941a6-cert\") pod \"infra-operator-controller-manager-7f6fb95f66-4xhrc\" (UID: \"dffa1b35-d981-4c5c-8df0-341e6a5941a6\") " 
pod="openstack-operators/infra-operator-controller-manager-7f6fb95f66-4xhrc" Jan 27 10:07:53 crc kubenswrapper[4869]: I0127 10:07:53.096245 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dffa1b35-d981-4c5c-8df0-341e6a5941a6-cert\") pod \"infra-operator-controller-manager-7f6fb95f66-4xhrc\" (UID: \"dffa1b35-d981-4c5c-8df0-341e6a5941a6\") " pod="openstack-operators/infra-operator-controller-manager-7f6fb95f66-4xhrc" Jan 27 10:07:53 crc kubenswrapper[4869]: I0127 10:07:53.169347 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-q2ssx" Jan 27 10:07:53 crc kubenswrapper[4869]: I0127 10:07:53.178471 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7f6fb95f66-4xhrc" Jan 27 10:07:53 crc kubenswrapper[4869]: E0127 10:07:53.395520 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 10:07:53 crc kubenswrapper[4869]: E0127 10:07:53.395595 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/facb1993-c676-4104-9090-8f8b4d8576ed-cert podName:facb1993-c676-4104-9090-8f8b4d8576ed nodeName:}" failed. No retries permitted until 2026-01-27 10:08:09.39557975 +0000 UTC m=+858.016003833 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/facb1993-c676-4104-9090-8f8b4d8576ed-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" (UID: "facb1993-c676-4104-9090-8f8b4d8576ed") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 10:07:53 crc kubenswrapper[4869]: I0127 10:07:53.395910 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/facb1993-c676-4104-9090-8f8b4d8576ed-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2\" (UID: \"facb1993-c676-4104-9090-8f8b4d8576ed\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" Jan 27 10:07:54 crc kubenswrapper[4869]: I0127 10:07:54.104460 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-metrics-certs\") pod \"openstack-operator-controller-manager-7db7c99649-zbtgz\" (UID: \"6c54aaba-e55e-4168-9078-0de1b3f7e7fe\") " pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:54 crc kubenswrapper[4869]: I0127 10:07:54.104623 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-webhook-certs\") pod \"openstack-operator-controller-manager-7db7c99649-zbtgz\" (UID: \"6c54aaba-e55e-4168-9078-0de1b3f7e7fe\") " pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:54 crc kubenswrapper[4869]: I0127 10:07:54.116001 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-metrics-certs\") pod \"openstack-operator-controller-manager-7db7c99649-zbtgz\" (UID: \"6c54aaba-e55e-4168-9078-0de1b3f7e7fe\") " 
pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:54 crc kubenswrapper[4869]: I0127 10:07:54.119177 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6c54aaba-e55e-4168-9078-0de1b3f7e7fe-webhook-certs\") pod \"openstack-operator-controller-manager-7db7c99649-zbtgz\" (UID: \"6c54aaba-e55e-4168-9078-0de1b3f7e7fe\") " pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:54 crc kubenswrapper[4869]: I0127 10:07:54.134100 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-b6s8l" Jan 27 10:07:54 crc kubenswrapper[4869]: I0127 10:07:54.143089 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:56 crc kubenswrapper[4869]: I0127 10:07:56.384490 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7f6fb95f66-4xhrc"] Jan 27 10:07:56 crc kubenswrapper[4869]: I0127 10:07:56.405217 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz"] Jan 27 10:07:56 crc kubenswrapper[4869]: W0127 10:07:56.431164 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c54aaba_e55e_4168_9078_0de1b3f7e7fe.slice/crio-2ce2bbc12dff93054d6be66cc3433ebeee163f5afde011c139881ef74ff4bf61 WatchSource:0}: Error finding container 2ce2bbc12dff93054d6be66cc3433ebeee163f5afde011c139881ef74ff4bf61: Status 404 returned error can't find the container with id 2ce2bbc12dff93054d6be66cc3433ebeee163f5afde011c139881ef74ff4bf61 Jan 27 10:07:56 crc kubenswrapper[4869]: I0127 10:07:56.797540 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pvgqp" event={"ID":"5a1cd9b4-00f9-430f-8857-718672e03003","Type":"ContainerStarted","Data":"1a43e3100360ee6d3ed8059a391d6a4adf60510eec0ae0a0f5114ccbc855d4a3"} Jan 27 10:07:56 crc kubenswrapper[4869]: I0127 10:07:56.798609 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pvgqp" Jan 27 10:07:56 crc kubenswrapper[4869]: I0127 10:07:56.799710 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7f6fb95f66-4xhrc" event={"ID":"dffa1b35-d981-4c5c-8df0-341e6a5941a6","Type":"ContainerStarted","Data":"555e805583c68d91b5e12ccbc9bf25a88b3f88f658c45c577819a5d0bc078fe4"} Jan 27 10:07:56 crc kubenswrapper[4869]: I0127 10:07:56.801226 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" event={"ID":"6c54aaba-e55e-4168-9078-0de1b3f7e7fe","Type":"ContainerStarted","Data":"8a18e2ff83005be360592834b89e50a30f5b877d5109ee7d2d8b6d9636225833"} Jan 27 10:07:56 crc kubenswrapper[4869]: I0127 10:07:56.801275 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" event={"ID":"6c54aaba-e55e-4168-9078-0de1b3f7e7fe","Type":"ContainerStarted","Data":"2ce2bbc12dff93054d6be66cc3433ebeee163f5afde011c139881ef74ff4bf61"} Jan 27 10:07:56 crc kubenswrapper[4869]: I0127 10:07:56.801388 
4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:07:56 crc kubenswrapper[4869]: I0127 10:07:56.802997 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kph4d" event={"ID":"95e36175-15e3-4f1f-8063-5f3bade317b6","Type":"ContainerStarted","Data":"4d4359620f3a6e722ce943ee4e0df3022378ee379fbefc6f5c5084db99451558"} Jan 27 10:07:56 crc kubenswrapper[4869]: I0127 10:07:56.803195 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kph4d" Jan 27 10:07:56 crc kubenswrapper[4869]: I0127 10:07:56.821486 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pvgqp" podStartSLOduration=3.050325047 podStartE2EDuration="19.821464576s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="2026-01-27 10:07:39.182522514 +0000 UTC m=+827.802946597" lastFinishedPulling="2026-01-27 10:07:55.953662043 +0000 UTC m=+844.574086126" observedRunningTime="2026-01-27 10:07:56.808985249 +0000 UTC m=+845.429409342" watchObservedRunningTime="2026-01-27 10:07:56.821464576 +0000 UTC m=+845.441888659" Jan 27 10:07:56 crc kubenswrapper[4869]: I0127 10:07:56.829197 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kph4d" podStartSLOduration=3.074061607 podStartE2EDuration="19.82917929s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="2026-01-27 10:07:39.194393904 +0000 UTC m=+827.814817987" lastFinishedPulling="2026-01-27 10:07:55.949511577 +0000 UTC m=+844.569935670" observedRunningTime="2026-01-27 10:07:56.82432363 +0000 UTC m=+845.444747703" watchObservedRunningTime="2026-01-27 10:07:56.82917929 +0000 UTC m=+845.449603373" Jan 27 10:07:56 crc kubenswrapper[4869]: I0127 10:07:56.852436 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" podStartSLOduration=19.852418869 podStartE2EDuration="19.852418869s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 10:07:56.850771815 +0000 UTC m=+845.471195898" watchObservedRunningTime="2026-01-27 10:07:56.852418869 +0000 UTC m=+845.472842952" Jan 27 10:07:57 crc kubenswrapper[4869]: I0127 10:07:57.408570 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-7p2lg" Jan 27 10:07:57 crc kubenswrapper[4869]: I0127 10:07:57.432876 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7tzv4" Jan 27 10:07:57 crc kubenswrapper[4869]: I0127 10:07:57.452104 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-g5vdg" Jan 27 10:07:57 crc kubenswrapper[4869]: I0127 10:07:57.500532 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-gbl72" Jan 27 10:07:57 crc kubenswrapper[4869]: I0127 10:07:57.529441 
4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-tjf5f" Jan 27 10:07:57 crc kubenswrapper[4869]: I0127 10:07:57.572674 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rhfb" Jan 27 10:07:57 crc kubenswrapper[4869]: I0127 10:07:57.799761 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-g799f" Jan 27 10:07:57 crc kubenswrapper[4869]: I0127 10:07:57.800106 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-8bvfh" Jan 27 10:07:57 crc kubenswrapper[4869]: I0127 10:07:57.835695 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dj764" Jan 27 10:07:57 crc kubenswrapper[4869]: I0127 10:07:57.862292 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-xnt8t" Jan 27 10:07:57 crc kubenswrapper[4869]: I0127 10:07:57.947426 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-h6pnx" Jan 27 10:07:58 crc kubenswrapper[4869]: I0127 10:07:58.066012 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-rscb2" Jan 27 10:07:58 crc kubenswrapper[4869]: I0127 10:07:58.185359 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-g2cx7" Jan 27 10:07:58 crc kubenswrapper[4869]: I0127 10:07:58.248434 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-hj2l8" Jan 27 10:08:01 crc kubenswrapper[4869]: I0127 10:08:01.840352 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-dgc6k" event={"ID":"61deff8e-df98-4cc4-86de-f60d12c8cfb9","Type":"ContainerStarted","Data":"83657d6437df82f425b3d72454c1e5f8a51be088242843c83db9d43c24d0c446"} Jan 27 10:08:01 crc kubenswrapper[4869]: I0127 10:08:01.841284 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-dgc6k" Jan 27 10:08:01 crc kubenswrapper[4869]: I0127 10:08:01.842811 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bjt9d" event={"ID":"24283d6a-6945-4ce8-991e-25102b2a0bea","Type":"ContainerStarted","Data":"f0dad887f420ff36875ac721fb812b4d29ec371737a550646608b4630d3693bd"} Jan 27 10:08:01 crc kubenswrapper[4869]: I0127 10:08:01.843233 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bjt9d" Jan 27 10:08:01 crc kubenswrapper[4869]: I0127 10:08:01.845529 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7f6fb95f66-4xhrc" event={"ID":"dffa1b35-d981-4c5c-8df0-341e6a5941a6","Type":"ContainerStarted","Data":"dc46321a6b4b69b0f2c64f871359d2c6759fca762e04fbb62124316b3e849757"} Jan 27 
10:08:01 crc kubenswrapper[4869]: I0127 10:08:01.846020 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7f6fb95f66-4xhrc" Jan 27 10:08:01 crc kubenswrapper[4869]: I0127 10:08:01.847969 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-6xdnp" event={"ID":"5b08b641-c912-4e41-911c-6d46e9d589c9","Type":"ContainerStarted","Data":"b180a26e3db9d4802ea55048f467036e483d1435efcf4daf0147defb07fdaf58"} Jan 27 10:08:01 crc kubenswrapper[4869]: I0127 10:08:01.848501 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-6xdnp" Jan 27 10:08:01 crc kubenswrapper[4869]: I0127 10:08:01.849942 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2vjxv" event={"ID":"bbc5d8d8-48d4-4b4f-96ee-87e21cf68ed2","Type":"ContainerStarted","Data":"337052068f8d267c4c85321e97a540f2ad3e4be879991d5642fe3c115a799102"} Jan 27 10:08:01 crc kubenswrapper[4869]: I0127 10:08:01.865178 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-dgc6k" podStartSLOduration=2.629200149 podStartE2EDuration="24.865161875s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="2026-01-27 10:07:39.220432839 +0000 UTC m=+827.840856922" lastFinishedPulling="2026-01-27 10:08:01.456394555 +0000 UTC m=+850.076818648" observedRunningTime="2026-01-27 10:08:01.859668115 +0000 UTC m=+850.480092198" watchObservedRunningTime="2026-01-27 10:08:01.865161875 +0000 UTC m=+850.485585958" Jan 27 10:08:01 crc kubenswrapper[4869]: I0127 10:08:01.882041 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7f6fb95f66-4xhrc" podStartSLOduration=19.851492449 podStartE2EDuration="24.882020906s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="2026-01-27 10:07:56.425867008 +0000 UTC m=+845.046291091" lastFinishedPulling="2026-01-27 10:08:01.456395465 +0000 UTC m=+850.076819548" observedRunningTime="2026-01-27 10:08:01.878042746 +0000 UTC m=+850.498466839" watchObservedRunningTime="2026-01-27 10:08:01.882020906 +0000 UTC m=+850.502444989" Jan 27 10:08:01 crc kubenswrapper[4869]: I0127 10:08:01.895724 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bjt9d" podStartSLOduration=2.476106489 podStartE2EDuration="24.895709084s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="2026-01-27 10:07:39.035499327 +0000 UTC m=+827.655923410" lastFinishedPulling="2026-01-27 10:08:01.455101902 +0000 UTC m=+850.075526005" observedRunningTime="2026-01-27 10:08:01.893302665 +0000 UTC m=+850.513726748" watchObservedRunningTime="2026-01-27 10:08:01.895709084 +0000 UTC m=+850.516133167" Jan 27 10:08:01 crc kubenswrapper[4869]: I0127 10:08:01.907400 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2vjxv" podStartSLOduration=1.658346233 podStartE2EDuration="23.907380726s" podCreationTimestamp="2026-01-27 10:07:38 +0000 UTC" firstStartedPulling="2026-01-27 10:07:39.232272897 +0000 UTC m=+827.852696980" lastFinishedPulling="2026-01-27 10:08:01.48130739 +0000 UTC 
m=+850.101731473" observedRunningTime="2026-01-27 10:08:01.905644499 +0000 UTC m=+850.526068592" watchObservedRunningTime="2026-01-27 10:08:01.907380726 +0000 UTC m=+850.527804809" Jan 27 10:08:01 crc kubenswrapper[4869]: I0127 10:08:01.925049 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-6xdnp" podStartSLOduration=2.503251515 podStartE2EDuration="24.925027282s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="2026-01-27 10:07:39.035306481 +0000 UTC m=+827.655730564" lastFinishedPulling="2026-01-27 10:08:01.457082248 +0000 UTC m=+850.077506331" observedRunningTime="2026-01-27 10:08:01.923438091 +0000 UTC m=+850.543862174" watchObservedRunningTime="2026-01-27 10:08:01.925027282 +0000 UTC m=+850.545451365" Jan 27 10:08:04 crc kubenswrapper[4869]: I0127 10:08:04.149376 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7db7c99649-zbtgz" Jan 27 10:08:07 crc kubenswrapper[4869]: I0127 10:08:07.946755 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-pvgqp" Jan 27 10:08:07 crc kubenswrapper[4869]: I0127 10:08:07.969667 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-kph4d" Jan 27 10:08:08 crc kubenswrapper[4869]: I0127 10:08:08.018622 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-bjt9d" Jan 27 10:08:08 crc kubenswrapper[4869]: I0127 10:08:08.201508 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-6xdnp" Jan 27 10:08:08 crc kubenswrapper[4869]: I0127 10:08:08.323149 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-dgc6k" Jan 27 10:08:09 crc kubenswrapper[4869]: I0127 10:08:09.434718 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/facb1993-c676-4104-9090-8f8b4d8576ed-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2\" (UID: \"facb1993-c676-4104-9090-8f8b4d8576ed\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" Jan 27 10:08:09 crc kubenswrapper[4869]: I0127 10:08:09.443728 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/facb1993-c676-4104-9090-8f8b4d8576ed-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2\" (UID: \"facb1993-c676-4104-9090-8f8b4d8576ed\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" Jan 27 10:08:09 crc kubenswrapper[4869]: I0127 10:08:09.484705 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-4wh9x" Jan 27 10:08:09 crc kubenswrapper[4869]: I0127 10:08:09.489499 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2"
Jan 27 10:08:09 crc kubenswrapper[4869]: I0127 10:08:09.765302 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2"]
Jan 27 10:08:09 crc kubenswrapper[4869]: I0127 10:08:09.902173 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" event={"ID":"facb1993-c676-4104-9090-8f8b4d8576ed","Type":"ContainerStarted","Data":"cd58fa1ba3c7693277056fa9cca466b78ea415691f2bf5b23ad610d45dd8cb19"}
Jan 27 10:08:13 crc kubenswrapper[4869]: I0127 10:08:13.185226 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7f6fb95f66-4xhrc"
Jan 27 10:08:15 crc kubenswrapper[4869]: I0127 10:08:15.940449 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" event={"ID":"facb1993-c676-4104-9090-8f8b4d8576ed","Type":"ContainerStarted","Data":"fe2f48d3be41dbd5323b87542c2167154a07234797efa4b83f5bef8cc264688c"}
Jan 27 10:08:15 crc kubenswrapper[4869]: I0127 10:08:15.941034 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2"
Jan 27 10:08:15 crc kubenswrapper[4869]: I0127 10:08:15.971972 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2" podStartSLOduration=33.880663422 podStartE2EDuration="38.971958158s" podCreationTimestamp="2026-01-27 10:07:37 +0000 UTC" firstStartedPulling="2026-01-27 10:08:09.774175181 +0000 UTC m=+858.394599264" lastFinishedPulling="2026-01-27 10:08:14.865469917 +0000 UTC m=+863.485894000" observedRunningTime="2026-01-27 10:08:15.969769537 +0000 UTC m=+864.590193630" watchObservedRunningTime="2026-01-27 10:08:15.971958158 +0000 UTC m=+864.592382241"
Jan 27 10:08:19 crc kubenswrapper[4869]: I0127 10:08:19.497032 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.236526 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-md25w"]
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.238616 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-md25w"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.243382 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.243715 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.244061 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.244210 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-688w8"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.247375 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-md25w"]
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.309953 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-96jkl"]
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.311244 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-96jkl"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.316313 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.321349 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-96jkl"]
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.402733 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caf80245-78e1-4583-bf43-795cbf39ad22-config\") pod \"dnsmasq-dns-675f4bcbfc-md25w\" (UID: \"caf80245-78e1-4583-bf43-795cbf39ad22\") " pod="openstack/dnsmasq-dns-675f4bcbfc-md25w"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.402793 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc9kv\" (UniqueName: \"kubernetes.io/projected/caf80245-78e1-4583-bf43-795cbf39ad22-kube-api-access-wc9kv\") pod \"dnsmasq-dns-675f4bcbfc-md25w\" (UID: \"caf80245-78e1-4583-bf43-795cbf39ad22\") " pod="openstack/dnsmasq-dns-675f4bcbfc-md25w"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.503671 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvbwx\" (UniqueName: \"kubernetes.io/projected/89faf77c-29ff-4a98-b4b6-41d4e3240ddb-kube-api-access-vvbwx\") pod \"dnsmasq-dns-78dd6ddcc-96jkl\" (UID: \"89faf77c-29ff-4a98-b4b6-41d4e3240ddb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-96jkl"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.503722 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89faf77c-29ff-4a98-b4b6-41d4e3240ddb-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-96jkl\" (UID: \"89faf77c-29ff-4a98-b4b6-41d4e3240ddb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-96jkl"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.503742 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89faf77c-29ff-4a98-b4b6-41d4e3240ddb-config\") pod \"dnsmasq-dns-78dd6ddcc-96jkl\" (UID: \"89faf77c-29ff-4a98-b4b6-41d4e3240ddb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-96jkl"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.503765 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caf80245-78e1-4583-bf43-795cbf39ad22-config\") pod \"dnsmasq-dns-675f4bcbfc-md25w\" (UID: \"caf80245-78e1-4583-bf43-795cbf39ad22\") " pod="openstack/dnsmasq-dns-675f4bcbfc-md25w"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.503788 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc9kv\" (UniqueName: \"kubernetes.io/projected/caf80245-78e1-4583-bf43-795cbf39ad22-kube-api-access-wc9kv\") pod \"dnsmasq-dns-675f4bcbfc-md25w\" (UID: \"caf80245-78e1-4583-bf43-795cbf39ad22\") " pod="openstack/dnsmasq-dns-675f4bcbfc-md25w"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.504961 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caf80245-78e1-4583-bf43-795cbf39ad22-config\") pod \"dnsmasq-dns-675f4bcbfc-md25w\" (UID: \"caf80245-78e1-4583-bf43-795cbf39ad22\") " pod="openstack/dnsmasq-dns-675f4bcbfc-md25w"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.532863 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc9kv\" (UniqueName: \"kubernetes.io/projected/caf80245-78e1-4583-bf43-795cbf39ad22-kube-api-access-wc9kv\") pod \"dnsmasq-dns-675f4bcbfc-md25w\" (UID: \"caf80245-78e1-4583-bf43-795cbf39ad22\") " pod="openstack/dnsmasq-dns-675f4bcbfc-md25w"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.562718 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-md25w"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.604581 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvbwx\" (UniqueName: \"kubernetes.io/projected/89faf77c-29ff-4a98-b4b6-41d4e3240ddb-kube-api-access-vvbwx\") pod \"dnsmasq-dns-78dd6ddcc-96jkl\" (UID: \"89faf77c-29ff-4a98-b4b6-41d4e3240ddb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-96jkl"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.604627 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89faf77c-29ff-4a98-b4b6-41d4e3240ddb-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-96jkl\" (UID: \"89faf77c-29ff-4a98-b4b6-41d4e3240ddb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-96jkl"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.604650 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89faf77c-29ff-4a98-b4b6-41d4e3240ddb-config\") pod \"dnsmasq-dns-78dd6ddcc-96jkl\" (UID: \"89faf77c-29ff-4a98-b4b6-41d4e3240ddb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-96jkl"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.605796 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89faf77c-29ff-4a98-b4b6-41d4e3240ddb-config\") pod \"dnsmasq-dns-78dd6ddcc-96jkl\" (UID: \"89faf77c-29ff-4a98-b4b6-41d4e3240ddb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-96jkl"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.608150 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89faf77c-29ff-4a98-b4b6-41d4e3240ddb-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-96jkl\" (UID: \"89faf77c-29ff-4a98-b4b6-41d4e3240ddb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-96jkl"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.627377 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvbwx\" (UniqueName: \"kubernetes.io/projected/89faf77c-29ff-4a98-b4b6-41d4e3240ddb-kube-api-access-vvbwx\") pod \"dnsmasq-dns-78dd6ddcc-96jkl\" (UID: \"89faf77c-29ff-4a98-b4b6-41d4e3240ddb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-96jkl"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.636971 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-96jkl"
Jan 27 10:08:35 crc kubenswrapper[4869]: I0127 10:08:35.988126 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-md25w"]
Jan 27 10:08:35 crc kubenswrapper[4869]: W0127 10:08:35.994579 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcaf80245_78e1_4583_bf43_795cbf39ad22.slice/crio-a5bbd4f14ae8cef30874dcbbda44d03fc7f996c5ee25b065dd5d38f9bd83831d WatchSource:0}: Error finding container a5bbd4f14ae8cef30874dcbbda44d03fc7f996c5ee25b065dd5d38f9bd83831d: Status 404 returned error can't find the container with id a5bbd4f14ae8cef30874dcbbda44d03fc7f996c5ee25b065dd5d38f9bd83831d
Jan 27 10:08:36 crc kubenswrapper[4869]: I0127 10:08:36.067871 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-96jkl"]
Jan 27 10:08:36 crc kubenswrapper[4869]: W0127 10:08:36.071117 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89faf77c_29ff_4a98_b4b6_41d4e3240ddb.slice/crio-681b3bd00f9b2d5fb1b2e176e1c25b5d60c4f123b7a29227c0afb6a354695724 WatchSource:0}: Error finding container 681b3bd00f9b2d5fb1b2e176e1c25b5d60c4f123b7a29227c0afb6a354695724: Status 404 returned error can't find the container with id 681b3bd00f9b2d5fb1b2e176e1c25b5d60c4f123b7a29227c0afb6a354695724
Jan 27 10:08:36 crc kubenswrapper[4869]: I0127 10:08:36.112932 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-96jkl" event={"ID":"89faf77c-29ff-4a98-b4b6-41d4e3240ddb","Type":"ContainerStarted","Data":"681b3bd00f9b2d5fb1b2e176e1c25b5d60c4f123b7a29227c0afb6a354695724"}
Jan 27 10:08:36 crc kubenswrapper[4869]: I0127 10:08:36.113804 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-md25w" event={"ID":"caf80245-78e1-4583-bf43-795cbf39ad22","Type":"ContainerStarted","Data":"a5bbd4f14ae8cef30874dcbbda44d03fc7f996c5ee25b065dd5d38f9bd83831d"}
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.157209 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-md25w"]
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.189422 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bckqb"]
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.190781 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bckqb"
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.204084 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bckqb"]
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.240953 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prlmx\" (UniqueName: \"kubernetes.io/projected/c60cfa24-5bbd-427e-be0c-428900867c80-kube-api-access-prlmx\") pod \"dnsmasq-dns-666b6646f7-bckqb\" (UID: \"c60cfa24-5bbd-427e-be0c-428900867c80\") " pod="openstack/dnsmasq-dns-666b6646f7-bckqb"
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.241004 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c60cfa24-5bbd-427e-be0c-428900867c80-config\") pod \"dnsmasq-dns-666b6646f7-bckqb\" (UID: \"c60cfa24-5bbd-427e-be0c-428900867c80\") " pod="openstack/dnsmasq-dns-666b6646f7-bckqb"
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.241031 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c60cfa24-5bbd-427e-be0c-428900867c80-dns-svc\") pod \"dnsmasq-dns-666b6646f7-bckqb\" (UID: \"c60cfa24-5bbd-427e-be0c-428900867c80\") " pod="openstack/dnsmasq-dns-666b6646f7-bckqb"
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.341912 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prlmx\" (UniqueName: \"kubernetes.io/projected/c60cfa24-5bbd-427e-be0c-428900867c80-kube-api-access-prlmx\") pod \"dnsmasq-dns-666b6646f7-bckqb\" (UID: \"c60cfa24-5bbd-427e-be0c-428900867c80\") " pod="openstack/dnsmasq-dns-666b6646f7-bckqb"
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.342020 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c60cfa24-5bbd-427e-be0c-428900867c80-config\") pod \"dnsmasq-dns-666b6646f7-bckqb\" (UID: \"c60cfa24-5bbd-427e-be0c-428900867c80\") " pod="openstack/dnsmasq-dns-666b6646f7-bckqb"
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.342049 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c60cfa24-5bbd-427e-be0c-428900867c80-dns-svc\") pod \"dnsmasq-dns-666b6646f7-bckqb\" (UID: \"c60cfa24-5bbd-427e-be0c-428900867c80\") " pod="openstack/dnsmasq-dns-666b6646f7-bckqb"
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.343071 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c60cfa24-5bbd-427e-be0c-428900867c80-dns-svc\") pod \"dnsmasq-dns-666b6646f7-bckqb\" (UID: \"c60cfa24-5bbd-427e-be0c-428900867c80\") " pod="openstack/dnsmasq-dns-666b6646f7-bckqb"
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.343090 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c60cfa24-5bbd-427e-be0c-428900867c80-config\") pod \"dnsmasq-dns-666b6646f7-bckqb\" (UID: \"c60cfa24-5bbd-427e-be0c-428900867c80\") " pod="openstack/dnsmasq-dns-666b6646f7-bckqb"
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.370927 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prlmx\" (UniqueName: \"kubernetes.io/projected/c60cfa24-5bbd-427e-be0c-428900867c80-kube-api-access-prlmx\") pod \"dnsmasq-dns-666b6646f7-bckqb\" (UID: \"c60cfa24-5bbd-427e-be0c-428900867c80\") " pod="openstack/dnsmasq-dns-666b6646f7-bckqb"
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.428282 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-96jkl"]
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.461760 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vbsxr"]
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.462817 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vbsxr"
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.477385 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vbsxr"]
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.521091 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bckqb"
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.646263 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6476143-8339-4837-8444-2bb4141d5da5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vbsxr\" (UID: \"f6476143-8339-4837-8444-2bb4141d5da5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vbsxr"
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.646360 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6476143-8339-4837-8444-2bb4141d5da5-config\") pod \"dnsmasq-dns-57d769cc4f-vbsxr\" (UID: \"f6476143-8339-4837-8444-2bb4141d5da5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vbsxr"
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.646380 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wjpd\" (UniqueName: \"kubernetes.io/projected/f6476143-8339-4837-8444-2bb4141d5da5-kube-api-access-8wjpd\") pod \"dnsmasq-dns-57d769cc4f-vbsxr\" (UID: \"f6476143-8339-4837-8444-2bb4141d5da5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vbsxr"
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.747892 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6476143-8339-4837-8444-2bb4141d5da5-config\") pod \"dnsmasq-dns-57d769cc4f-vbsxr\" (UID: \"f6476143-8339-4837-8444-2bb4141d5da5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vbsxr"
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.747940 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wjpd\" (UniqueName: \"kubernetes.io/projected/f6476143-8339-4837-8444-2bb4141d5da5-kube-api-access-8wjpd\") pod \"dnsmasq-dns-57d769cc4f-vbsxr\" (UID: \"f6476143-8339-4837-8444-2bb4141d5da5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vbsxr"
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.747975 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6476143-8339-4837-8444-2bb4141d5da5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vbsxr\" (UID: \"f6476143-8339-4837-8444-2bb4141d5da5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vbsxr"
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.749081 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6476143-8339-4837-8444-2bb4141d5da5-config\") pod \"dnsmasq-dns-57d769cc4f-vbsxr\" (UID: \"f6476143-8339-4837-8444-2bb4141d5da5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vbsxr"
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.749422 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6476143-8339-4837-8444-2bb4141d5da5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-vbsxr\" (UID: \"f6476143-8339-4837-8444-2bb4141d5da5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vbsxr"
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.780726 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wjpd\" (UniqueName: \"kubernetes.io/projected/f6476143-8339-4837-8444-2bb4141d5da5-kube-api-access-8wjpd\") pod \"dnsmasq-dns-57d769cc4f-vbsxr\" (UID: \"f6476143-8339-4837-8444-2bb4141d5da5\") " pod="openstack/dnsmasq-dns-57d769cc4f-vbsxr"
Jan 27 10:08:38 crc kubenswrapper[4869]: I0127 10:08:38.782479 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vbsxr"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.319793 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.321185 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.329643 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ldjmd"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.330243 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.330432 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.330596 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.331321 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.331429 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.331563 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.336007 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.881044 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.881087 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.881113 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-config-data\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.881135 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92rlx\" (UniqueName: \"kubernetes.io/projected/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-kube-api-access-92rlx\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.881161 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.881203 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.881230 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.881265 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.881314 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.881347 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.881399 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.913603 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.915114 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.918959 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.919267 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jwkgb"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.919377 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.919472 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.919563 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.919662 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.922721 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.944869 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.971486 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bckqb"]
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.984278 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.984542 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.984568 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2c6m\" (UniqueName: \"kubernetes.io/projected/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-kube-api-access-f2c6m\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.984595 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.984619 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.984643 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.984662 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.984682 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.988002 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.988050 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.988089 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.988215 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.988276 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.988314 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.988339 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-config-data\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.988362 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92rlx\" (UniqueName: \"kubernetes.io/projected/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-kube-api-access-92rlx\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.988392 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.988416 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.988428 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.988439 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.988514 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.988545 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.988572 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.989166 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.991501 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.992098 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.994863 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.996014 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-config-data\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.996392 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:39 crc kubenswrapper[4869]: I0127 10:08:39.998365 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.005907 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.016449 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92rlx\" (UniqueName: \"kubernetes.io/projected/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-kube-api-access-92rlx\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.030263 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/61608a46-7d70-4a1b-ac50-6238d5bf7ad9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.047437 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"61608a46-7d70-4a1b-ac50-6238d5bf7ad9\") " pod="openstack/rabbitmq-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.089686 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.089763 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.089777 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.089854 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.089888 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.089902 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2c6m\" (UniqueName: \"kubernetes.io/projected/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-kube-api-access-f2c6m\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.089942 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.090001 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.090019 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.090045 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.090067 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.090213 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.091488 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.092710 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.093226 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.095058 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.095101 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.102587 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.112720 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.117893 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.121868 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2c6m\" (UniqueName: \"kubernetes.io/projected/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-kube-api-access-f2c6m\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.126103 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.137296 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.260113 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.279135 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 10:08:40 crc kubenswrapper[4869]: I0127 10:08:40.321396 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vbsxr"]
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.337432 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.344855 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.347997 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.348190 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.348637 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.348740 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-nbpbs"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.349701 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.367078 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.425129 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9622ab05-c494-4c2b-b376-6f82ded8bdc5-kolla-config\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.425181 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9622ab05-c494-4c2b-b376-6f82ded8bdc5-config-data-default\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.425210 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.425242 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9622ab05-c494-4c2b-b376-6f82ded8bdc5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.425272 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r4wm\" (UniqueName: \"kubernetes.io/projected/9622ab05-c494-4c2b-b376-6f82ded8bdc5-kube-api-access-2r4wm\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.425424 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9622ab05-c494-4c2b-b376-6f82ded8bdc5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.425462 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9622ab05-c494-4c2b-b376-6f82ded8bdc5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.425486 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9622ab05-c494-4c2b-b376-6f82ded8bdc5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.528511 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9622ab05-c494-4c2b-b376-6f82ded8bdc5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.528558 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9622ab05-c494-4c2b-b376-6f82ded8bdc5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.528616 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9622ab05-c494-4c2b-b376-6f82ded8bdc5-kolla-config\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.528640 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9622ab05-c494-4c2b-b376-6f82ded8bdc5-config-data-default\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.528674 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.528715 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9622ab05-c494-4c2b-b376-6f82ded8bdc5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.528751 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r4wm\" (UniqueName: \"kubernetes.io/projected/9622ab05-c494-4c2b-b376-6f82ded8bdc5-kube-api-access-2r4wm\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.529103 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.529385 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9622ab05-c494-4c2b-b376-6f82ded8bdc5-kolla-config\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.529481 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9622ab05-c494-4c2b-b376-6f82ded8bdc5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.530051 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9622ab05-c494-4c2b-b376-6f82ded8bdc5-config-data-default\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.530611 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9622ab05-c494-4c2b-b376-6f82ded8bdc5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.530845 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9622ab05-c494-4c2b-b376-6f82ded8bdc5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.534272 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9622ab05-c494-4c2b-b376-6f82ded8bdc5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.543486 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9622ab05-c494-4c2b-b376-6f82ded8bdc5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.549656 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r4wm\" (UniqueName: \"kubernetes.io/projected/9622ab05-c494-4c2b-b376-6f82ded8bdc5-kube-api-access-2r4wm\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.556142 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"9622ab05-c494-4c2b-b376-6f82ded8bdc5\") " pod="openstack/openstack-galera-0"
Jan 27 10:08:41 crc kubenswrapper[4869]: I0127 10:08:41.674402 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.119331 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.120698 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.126594 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.126852 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-bb9xl"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.129954 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.130536 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.132116 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.239962 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/518f6a90-a761-4aba-9740-c3aef7d8b0c4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.240007 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzjbx\" (UniqueName: \"kubernetes.io/projected/518f6a90-a761-4aba-9740-c3aef7d8b0c4-kube-api-access-vzjbx\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.240026 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.240054 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/518f6a90-a761-4aba-9740-c3aef7d8b0c4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.240493 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/518f6a90-a761-4aba-9740-c3aef7d8b0c4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.240536 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/518f6a90-a761-4aba-9740-c3aef7d8b0c4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.240713 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/518f6a90-a761-4aba-9740-c3aef7d8b0c4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.240811 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/518f6a90-a761-4aba-9740-c3aef7d8b0c4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.342456 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/518f6a90-a761-4aba-9740-c3aef7d8b0c4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.342548 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/518f6a90-a761-4aba-9740-c3aef7d8b0c4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.342584 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzjbx\" (UniqueName: \"kubernetes.io/projected/518f6a90-a761-4aba-9740-c3aef7d8b0c4-kube-api-access-vzjbx\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.342611 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.342647 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/518f6a90-a761-4aba-9740-c3aef7d8b0c4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.342702 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/518f6a90-a761-4aba-9740-c3aef7d8b0c4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.342724 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/518f6a90-a761-4aba-9740-c3aef7d8b0c4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.342772 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/518f6a90-a761-4aba-9740-c3aef7d8b0c4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.343578 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/518f6a90-a761-4aba-9740-c3aef7d8b0c4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.344244 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.348326 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/518f6a90-a761-4aba-9740-c3aef7d8b0c4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.348908 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/518f6a90-a761-4aba-9740-c3aef7d8b0c4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.349597 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/518f6a90-a761-4aba-9740-c3aef7d8b0c4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.351404 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/518f6a90-a761-4aba-9740-c3aef7d8b0c4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.360318 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/518f6a90-a761-4aba-9740-c3aef7d8b0c4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0"
Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.367501 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzjbx\" (UniqueName: \"kubernetes.io/projected/518f6a90-a761-4aba-9740-c3aef7d8b0c4-kube-api-access-vzjbx\") pod \"openstack-cell1-galera-0\" (UID:
\"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0" Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.375540 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"518f6a90-a761-4aba-9740-c3aef7d8b0c4\") " pod="openstack/openstack-cell1-galera-0" Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.442718 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.681352 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.682179 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.684308 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.684388 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-grdjh" Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.685241 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.748810 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/94bea268-7e39-4aeb-a45c-8008593eb45c-kolla-config\") pod \"memcached-0\" (UID: \"94bea268-7e39-4aeb-a45c-8008593eb45c\") " pod="openstack/memcached-0" Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.748875 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94bea268-7e39-4aeb-a45c-8008593eb45c-config-data\") pod \"memcached-0\" (UID: \"94bea268-7e39-4aeb-a45c-8008593eb45c\") " pod="openstack/memcached-0" Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.749101 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94bea268-7e39-4aeb-a45c-8008593eb45c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"94bea268-7e39-4aeb-a45c-8008593eb45c\") " pod="openstack/memcached-0" Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.749139 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/94bea268-7e39-4aeb-a45c-8008593eb45c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"94bea268-7e39-4aeb-a45c-8008593eb45c\") " pod="openstack/memcached-0" Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.749166 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9w4v\" (UniqueName: \"kubernetes.io/projected/94bea268-7e39-4aeb-a45c-8008593eb45c-kube-api-access-z9w4v\") pod \"memcached-0\" (UID: \"94bea268-7e39-4aeb-a45c-8008593eb45c\") " pod="openstack/memcached-0" Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.755874 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 10:08:42 crc kubenswrapper[4869]: W0127 10:08:42.785946 4869 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6476143_8339_4837_8444_2bb4141d5da5.slice/crio-252dea5c7a5dfc7c94234e2a61a817bd839f48dc3806b5101bd7c474766e414a WatchSource:0}: Error finding container 252dea5c7a5dfc7c94234e2a61a817bd839f48dc3806b5101bd7c474766e414a: Status 404 returned error can't find the container with id 252dea5c7a5dfc7c94234e2a61a817bd839f48dc3806b5101bd7c474766e414a Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.852550 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/94bea268-7e39-4aeb-a45c-8008593eb45c-kolla-config\") pod \"memcached-0\" (UID: \"94bea268-7e39-4aeb-a45c-8008593eb45c\") " pod="openstack/memcached-0" Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.852599 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94bea268-7e39-4aeb-a45c-8008593eb45c-config-data\") pod \"memcached-0\" (UID: \"94bea268-7e39-4aeb-a45c-8008593eb45c\") " pod="openstack/memcached-0" Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.852647 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94bea268-7e39-4aeb-a45c-8008593eb45c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"94bea268-7e39-4aeb-a45c-8008593eb45c\") " pod="openstack/memcached-0" Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.852666 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/94bea268-7e39-4aeb-a45c-8008593eb45c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"94bea268-7e39-4aeb-a45c-8008593eb45c\") " pod="openstack/memcached-0" Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.852688 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9w4v\" (UniqueName: \"kubernetes.io/projected/94bea268-7e39-4aeb-a45c-8008593eb45c-kube-api-access-z9w4v\") pod \"memcached-0\" (UID: \"94bea268-7e39-4aeb-a45c-8008593eb45c\") " pod="openstack/memcached-0" Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.853757 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/94bea268-7e39-4aeb-a45c-8008593eb45c-kolla-config\") pod \"memcached-0\" (UID: \"94bea268-7e39-4aeb-a45c-8008593eb45c\") " pod="openstack/memcached-0" Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.854231 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94bea268-7e39-4aeb-a45c-8008593eb45c-config-data\") pod \"memcached-0\" (UID: \"94bea268-7e39-4aeb-a45c-8008593eb45c\") " pod="openstack/memcached-0" Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.863245 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94bea268-7e39-4aeb-a45c-8008593eb45c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"94bea268-7e39-4aeb-a45c-8008593eb45c\") " pod="openstack/memcached-0" Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.869324 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/94bea268-7e39-4aeb-a45c-8008593eb45c-memcached-tls-certs\") pod 
\"memcached-0\" (UID: \"94bea268-7e39-4aeb-a45c-8008593eb45c\") " pod="openstack/memcached-0" Jan 27 10:08:42 crc kubenswrapper[4869]: I0127 10:08:42.876454 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9w4v\" (UniqueName: \"kubernetes.io/projected/94bea268-7e39-4aeb-a45c-8008593eb45c-kube-api-access-z9w4v\") pod \"memcached-0\" (UID: \"94bea268-7e39-4aeb-a45c-8008593eb45c\") " pod="openstack/memcached-0" Jan 27 10:08:43 crc kubenswrapper[4869]: I0127 10:08:43.026234 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 10:08:43 crc kubenswrapper[4869]: I0127 10:08:43.163868 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bckqb" event={"ID":"c60cfa24-5bbd-427e-be0c-428900867c80","Type":"ContainerStarted","Data":"7158b1d6f82cc7ff540903301fb4ab9080b55bffc63aa4ddf9f4a7eff8015908"} Jan 27 10:08:43 crc kubenswrapper[4869]: I0127 10:08:43.164878 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vbsxr" event={"ID":"f6476143-8339-4837-8444-2bb4141d5da5","Type":"ContainerStarted","Data":"252dea5c7a5dfc7c94234e2a61a817bd839f48dc3806b5101bd7c474766e414a"} Jan 27 10:08:43 crc kubenswrapper[4869]: I0127 10:08:43.980218 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 10:08:44 crc kubenswrapper[4869]: I0127 10:08:44.528019 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 10:08:44 crc kubenswrapper[4869]: I0127 10:08:44.529166 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 10:08:44 crc kubenswrapper[4869]: I0127 10:08:44.537379 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-dmpn7" Jan 27 10:08:44 crc kubenswrapper[4869]: I0127 10:08:44.544647 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 10:08:44 crc kubenswrapper[4869]: I0127 10:08:44.592806 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb4dl\" (UniqueName: \"kubernetes.io/projected/01cc337f-b143-40db-b4d4-cc66a1549639-kube-api-access-bb4dl\") pod \"kube-state-metrics-0\" (UID: \"01cc337f-b143-40db-b4d4-cc66a1549639\") " pod="openstack/kube-state-metrics-0" Jan 27 10:08:44 crc kubenswrapper[4869]: I0127 10:08:44.694313 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb4dl\" (UniqueName: \"kubernetes.io/projected/01cc337f-b143-40db-b4d4-cc66a1549639-kube-api-access-bb4dl\") pod \"kube-state-metrics-0\" (UID: \"01cc337f-b143-40db-b4d4-cc66a1549639\") " pod="openstack/kube-state-metrics-0" Jan 27 10:08:44 crc kubenswrapper[4869]: I0127 10:08:44.714812 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb4dl\" (UniqueName: \"kubernetes.io/projected/01cc337f-b143-40db-b4d4-cc66a1549639-kube-api-access-bb4dl\") pod \"kube-state-metrics-0\" (UID: \"01cc337f-b143-40db-b4d4-cc66a1549639\") " pod="openstack/kube-state-metrics-0" Jan 27 10:08:44 crc kubenswrapper[4869]: I0127 10:08:44.852349 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 10:08:45 crc kubenswrapper[4869]: I0127 10:08:45.698021 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:08:45 crc kubenswrapper[4869]: I0127 10:08:45.698083 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:08:46 crc kubenswrapper[4869]: W0127 10:08:46.413642 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc9f9a53_b2d4_4a7f_a4ad_5fe5f6b99f80.slice/crio-f48e7442ec28467487bda984150dbf432928468bab212ad2a7be04f157b37be0 WatchSource:0}: Error finding container f48e7442ec28467487bda984150dbf432928468bab212ad2a7be04f157b37be0: Status 404 returned error can't find the container with id f48e7442ec28467487bda984150dbf432928468bab212ad2a7be04f157b37be0 Jan 27 10:08:46 crc kubenswrapper[4869]: I0127 10:08:46.854594 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 10:08:47 crc kubenswrapper[4869]: I0127 10:08:47.212025 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerStarted","Data":"f48e7442ec28467487bda984150dbf432928468bab212ad2a7be04f157b37be0"} Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.148970 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qf659"] Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.150102 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.152139 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-x5r4s" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.152377 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.158376 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.168901 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qf659"] Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.194341 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-jd977"] Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.196856 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.219245 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jd977"] Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.244824 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/795fb025-6527-42e5-b95f-119a55caf010-var-lib\") pod \"ovn-controller-ovs-jd977\" (UID: \"795fb025-6527-42e5-b95f-119a55caf010\") " pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.244876 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e545b253-d74a-43e1-9a14-990ea5784f16-combined-ca-bundle\") pod \"ovn-controller-qf659\" (UID: \"e545b253-d74a-43e1-9a14-990ea5784f16\") " pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.244904 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kclnx\" (UniqueName: \"kubernetes.io/projected/e545b253-d74a-43e1-9a14-990ea5784f16-kube-api-access-kclnx\") pod \"ovn-controller-qf659\" (UID: \"e545b253-d74a-43e1-9a14-990ea5784f16\") " pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.244922 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/795fb025-6527-42e5-b95f-119a55caf010-scripts\") pod \"ovn-controller-ovs-jd977\" (UID: \"795fb025-6527-42e5-b95f-119a55caf010\") " pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.244936 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e545b253-d74a-43e1-9a14-990ea5784f16-var-run\") pod \"ovn-controller-qf659\" (UID: \"e545b253-d74a-43e1-9a14-990ea5784f16\") " pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.244950 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/795fb025-6527-42e5-b95f-119a55caf010-var-log\") pod \"ovn-controller-ovs-jd977\" (UID: \"795fb025-6527-42e5-b95f-119a55caf010\") " pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.244965 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-982cb\" (UniqueName: \"kubernetes.io/projected/795fb025-6527-42e5-b95f-119a55caf010-kube-api-access-982cb\") pod \"ovn-controller-ovs-jd977\" (UID: \"795fb025-6527-42e5-b95f-119a55caf010\") " pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.245116 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/795fb025-6527-42e5-b95f-119a55caf010-var-run\") pod \"ovn-controller-ovs-jd977\" (UID: \"795fb025-6527-42e5-b95f-119a55caf010\") " pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.245165 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/e545b253-d74a-43e1-9a14-990ea5784f16-scripts\") pod \"ovn-controller-qf659\" (UID: \"e545b253-d74a-43e1-9a14-990ea5784f16\") " pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.245199 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/795fb025-6527-42e5-b95f-119a55caf010-etc-ovs\") pod \"ovn-controller-ovs-jd977\" (UID: \"795fb025-6527-42e5-b95f-119a55caf010\") " pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.245248 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e545b253-d74a-43e1-9a14-990ea5784f16-ovn-controller-tls-certs\") pod \"ovn-controller-qf659\" (UID: \"e545b253-d74a-43e1-9a14-990ea5784f16\") " pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.245356 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e545b253-d74a-43e1-9a14-990ea5784f16-var-log-ovn\") pod \"ovn-controller-qf659\" (UID: \"e545b253-d74a-43e1-9a14-990ea5784f16\") " pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.245403 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e545b253-d74a-43e1-9a14-990ea5784f16-var-run-ovn\") pod \"ovn-controller-qf659\" (UID: \"e545b253-d74a-43e1-9a14-990ea5784f16\") " pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.346823 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e545b253-d74a-43e1-9a14-990ea5784f16-combined-ca-bundle\") pod \"ovn-controller-qf659\" (UID: \"e545b253-d74a-43e1-9a14-990ea5784f16\") " pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.346889 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kclnx\" (UniqueName: \"kubernetes.io/projected/e545b253-d74a-43e1-9a14-990ea5784f16-kube-api-access-kclnx\") pod \"ovn-controller-qf659\" (UID: \"e545b253-d74a-43e1-9a14-990ea5784f16\") " pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.346909 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/795fb025-6527-42e5-b95f-119a55caf010-scripts\") pod \"ovn-controller-ovs-jd977\" (UID: \"795fb025-6527-42e5-b95f-119a55caf010\") " pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.346926 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e545b253-d74a-43e1-9a14-990ea5784f16-var-run\") pod \"ovn-controller-qf659\" (UID: \"e545b253-d74a-43e1-9a14-990ea5784f16\") " pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.346943 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/795fb025-6527-42e5-b95f-119a55caf010-var-log\") pod \"ovn-controller-ovs-jd977\" (UID: 
\"795fb025-6527-42e5-b95f-119a55caf010\") " pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.346959 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-982cb\" (UniqueName: \"kubernetes.io/projected/795fb025-6527-42e5-b95f-119a55caf010-kube-api-access-982cb\") pod \"ovn-controller-ovs-jd977\" (UID: \"795fb025-6527-42e5-b95f-119a55caf010\") " pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.346982 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/795fb025-6527-42e5-b95f-119a55caf010-var-run\") pod \"ovn-controller-ovs-jd977\" (UID: \"795fb025-6527-42e5-b95f-119a55caf010\") " pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.346995 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e545b253-d74a-43e1-9a14-990ea5784f16-scripts\") pod \"ovn-controller-qf659\" (UID: \"e545b253-d74a-43e1-9a14-990ea5784f16\") " pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.347013 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/795fb025-6527-42e5-b95f-119a55caf010-etc-ovs\") pod \"ovn-controller-ovs-jd977\" (UID: \"795fb025-6527-42e5-b95f-119a55caf010\") " pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.347033 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e545b253-d74a-43e1-9a14-990ea5784f16-ovn-controller-tls-certs\") pod \"ovn-controller-qf659\" (UID: \"e545b253-d74a-43e1-9a14-990ea5784f16\") " pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.347064 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e545b253-d74a-43e1-9a14-990ea5784f16-var-log-ovn\") pod \"ovn-controller-qf659\" (UID: \"e545b253-d74a-43e1-9a14-990ea5784f16\") " pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.347084 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e545b253-d74a-43e1-9a14-990ea5784f16-var-run-ovn\") pod \"ovn-controller-qf659\" (UID: \"e545b253-d74a-43e1-9a14-990ea5784f16\") " pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.347139 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/795fb025-6527-42e5-b95f-119a55caf010-var-lib\") pod \"ovn-controller-ovs-jd977\" (UID: \"795fb025-6527-42e5-b95f-119a55caf010\") " pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.347631 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/795fb025-6527-42e5-b95f-119a55caf010-var-lib\") pod \"ovn-controller-ovs-jd977\" (UID: \"795fb025-6527-42e5-b95f-119a55caf010\") " pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.348243 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/795fb025-6527-42e5-b95f-119a55caf010-var-run\") pod \"ovn-controller-ovs-jd977\" (UID: \"795fb025-6527-42e5-b95f-119a55caf010\") " pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.348519 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e545b253-d74a-43e1-9a14-990ea5784f16-var-log-ovn\") pod \"ovn-controller-qf659\" (UID: \"e545b253-d74a-43e1-9a14-990ea5784f16\") " pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.348706 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/795fb025-6527-42e5-b95f-119a55caf010-var-log\") pod \"ovn-controller-ovs-jd977\" (UID: \"795fb025-6527-42e5-b95f-119a55caf010\") " pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.348819 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e545b253-d74a-43e1-9a14-990ea5784f16-var-run-ovn\") pod \"ovn-controller-qf659\" (UID: \"e545b253-d74a-43e1-9a14-990ea5784f16\") " pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.348827 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e545b253-d74a-43e1-9a14-990ea5784f16-var-run\") pod \"ovn-controller-qf659\" (UID: \"e545b253-d74a-43e1-9a14-990ea5784f16\") " pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.348930 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/795fb025-6527-42e5-b95f-119a55caf010-etc-ovs\") pod \"ovn-controller-ovs-jd977\" (UID: \"795fb025-6527-42e5-b95f-119a55caf010\") " pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.350438 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/795fb025-6527-42e5-b95f-119a55caf010-scripts\") pod \"ovn-controller-ovs-jd977\" (UID: \"795fb025-6527-42e5-b95f-119a55caf010\") " pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.352042 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e545b253-d74a-43e1-9a14-990ea5784f16-scripts\") pod \"ovn-controller-qf659\" (UID: \"e545b253-d74a-43e1-9a14-990ea5784f16\") " pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.352588 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e545b253-d74a-43e1-9a14-990ea5784f16-combined-ca-bundle\") pod \"ovn-controller-qf659\" (UID: \"e545b253-d74a-43e1-9a14-990ea5784f16\") " pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.352607 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e545b253-d74a-43e1-9a14-990ea5784f16-ovn-controller-tls-certs\") pod \"ovn-controller-qf659\" (UID: \"e545b253-d74a-43e1-9a14-990ea5784f16\") " pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.372089 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-982cb\" (UniqueName: \"kubernetes.io/projected/795fb025-6527-42e5-b95f-119a55caf010-kube-api-access-982cb\") pod \"ovn-controller-ovs-jd977\" (UID: \"795fb025-6527-42e5-b95f-119a55caf010\") " pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.375129 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kclnx\" (UniqueName: \"kubernetes.io/projected/e545b253-d74a-43e1-9a14-990ea5784f16-kube-api-access-kclnx\") pod \"ovn-controller-qf659\" (UID: \"e545b253-d74a-43e1-9a14-990ea5784f16\") " pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.466726 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qf659" Jan 27 10:08:48 crc kubenswrapper[4869]: I0127 10:08:48.520596 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:08:49 crc kubenswrapper[4869]: I0127 10:08:49.990213 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 10:08:49 crc kubenswrapper[4869]: I0127 10:08:49.993341 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:49 crc kubenswrapper[4869]: I0127 10:08:49.997878 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 27 10:08:49 crc kubenswrapper[4869]: I0127 10:08:49.998221 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 27 10:08:49 crc kubenswrapper[4869]: I0127 10:08:49.998435 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 27 10:08:49 crc kubenswrapper[4869]: I0127 10:08:49.999301 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-kv8hq" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.001097 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.024033 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.071812 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e88fbb7c-3771-4bd5-a511-af923a24a69f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.071894 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e88fbb7c-3771-4bd5-a511-af923a24a69f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.071919 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e88fbb7c-3771-4bd5-a511-af923a24a69f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " 
pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.071944 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e88fbb7c-3771-4bd5-a511-af923a24a69f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.071983 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e88fbb7c-3771-4bd5-a511-af923a24a69f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.072024 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktllq\" (UniqueName: \"kubernetes.io/projected/e88fbb7c-3771-4bd5-a511-af923a24a69f-kube-api-access-ktllq\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.072043 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e88fbb7c-3771-4bd5-a511-af923a24a69f-config\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.072067 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.173318 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e88fbb7c-3771-4bd5-a511-af923a24a69f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.173391 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e88fbb7c-3771-4bd5-a511-af923a24a69f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.173416 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e88fbb7c-3771-4bd5-a511-af923a24a69f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.173444 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e88fbb7c-3771-4bd5-a511-af923a24a69f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.173542 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e88fbb7c-3771-4bd5-a511-af923a24a69f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.173608 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktllq\" (UniqueName: \"kubernetes.io/projected/e88fbb7c-3771-4bd5-a511-af923a24a69f-kube-api-access-ktllq\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.173634 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e88fbb7c-3771-4bd5-a511-af923a24a69f-config\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.173668 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.174070 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.174589 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e88fbb7c-3771-4bd5-a511-af923a24a69f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.175011 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e88fbb7c-3771-4bd5-a511-af923a24a69f-config\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.175535 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e88fbb7c-3771-4bd5-a511-af923a24a69f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.181607 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e88fbb7c-3771-4bd5-a511-af923a24a69f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.188542 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e88fbb7c-3771-4bd5-a511-af923a24a69f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.196637 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e88fbb7c-3771-4bd5-a511-af923a24a69f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.200940 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktllq\" (UniqueName: \"kubernetes.io/projected/e88fbb7c-3771-4bd5-a511-af923a24a69f-kube-api-access-ktllq\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.205013 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e88fbb7c-3771-4bd5-a511-af923a24a69f\") " pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:50 crc kubenswrapper[4869]: I0127 10:08:50.322252 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 10:08:51 crc kubenswrapper[4869]: W0127 10:08:51.337283 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9622ab05_c494_4c2b_b376_6f82ded8bdc5.slice/crio-b187a8576efc004f4ee50409180b5ea313f20c502df65d7b90acacd2f8fd193a WatchSource:0}: Error finding container b187a8576efc004f4ee50409180b5ea313f20c502df65d7b90acacd2f8fd193a: Status 404 returned error can't find the container with id b187a8576efc004f4ee50409180b5ea313f20c502df65d7b90acacd2f8fd193a Jan 27 10:08:51 crc kubenswrapper[4869]: I0127 10:08:51.340828 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 10:08:51 crc kubenswrapper[4869]: I0127 10:08:51.843272 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 10:08:51 crc kubenswrapper[4869]: I0127 10:08:51.846673 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:51 crc kubenswrapper[4869]: I0127 10:08:51.848464 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-bn7cv" Jan 27 10:08:51 crc kubenswrapper[4869]: I0127 10:08:51.849268 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 27 10:08:51 crc kubenswrapper[4869]: I0127 10:08:51.849470 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 27 10:08:51 crc kubenswrapper[4869]: I0127 10:08:51.849471 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 27 10:08:51 crc kubenswrapper[4869]: I0127 10:08:51.856177 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 10:08:51 crc kubenswrapper[4869]: I0127 10:08:51.898589 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:51 crc kubenswrapper[4869]: I0127 10:08:51.898744 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66641dc3-4cf2-4418-905a-fe1cff14e999-config\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:51 crc kubenswrapper[4869]: I0127 10:08:51.898798 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66641dc3-4cf2-4418-905a-fe1cff14e999-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:51 crc kubenswrapper[4869]: I0127 10:08:51.898921 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66641dc3-4cf2-4418-905a-fe1cff14e999-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:51 crc kubenswrapper[4869]: I0127 10:08:51.898987 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mnnm\" (UniqueName: \"kubernetes.io/projected/66641dc3-4cf2-4418-905a-fe1cff14e999-kube-api-access-4mnnm\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:51 crc kubenswrapper[4869]: I0127 10:08:51.899069 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/66641dc3-4cf2-4418-905a-fe1cff14e999-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:51 crc kubenswrapper[4869]: I0127 10:08:51.899106 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/66641dc3-4cf2-4418-905a-fe1cff14e999-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:51 crc kubenswrapper[4869]: I0127 10:08:51.899139 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/66641dc3-4cf2-4418-905a-fe1cff14e999-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.000999 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/66641dc3-4cf2-4418-905a-fe1cff14e999-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.001065 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/66641dc3-4cf2-4418-905a-fe1cff14e999-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.001094 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/66641dc3-4cf2-4418-905a-fe1cff14e999-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.001126 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.001177 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66641dc3-4cf2-4418-905a-fe1cff14e999-config\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.001199 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66641dc3-4cf2-4418-905a-fe1cff14e999-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.001234 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66641dc3-4cf2-4418-905a-fe1cff14e999-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.001251 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mnnm\" (UniqueName: \"kubernetes.io/projected/66641dc3-4cf2-4418-905a-fe1cff14e999-kube-api-access-4mnnm\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.002139 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/66641dc3-4cf2-4418-905a-fe1cff14e999-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.002155 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.004000 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.004144 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.004239 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.006109 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66641dc3-4cf2-4418-905a-fe1cff14e999-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.012643 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66641dc3-4cf2-4418-905a-fe1cff14e999-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.012852 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/66641dc3-4cf2-4418-905a-fe1cff14e999-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.014162 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66641dc3-4cf2-4418-905a-fe1cff14e999-config\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.016336 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mnnm\" (UniqueName: \"kubernetes.io/projected/66641dc3-4cf2-4418-905a-fe1cff14e999-kube-api-access-4mnnm\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.024678 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.034396 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/66641dc3-4cf2-4418-905a-fe1cff14e999-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"66641dc3-4cf2-4418-905a-fe1cff14e999\") " 
pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.183404 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-bn7cv" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.192643 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.273075 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9622ab05-c494-4c2b-b376-6f82ded8bdc5","Type":"ContainerStarted","Data":"b187a8576efc004f4ee50409180b5ea313f20c502df65d7b90acacd2f8fd193a"} Jan 27 10:08:52 crc kubenswrapper[4869]: E0127 10:08:52.356149 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 10:08:52 crc kubenswrapper[4869]: E0127 10:08:52.356501 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvbwx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-96jkl_openstack(89faf77c-29ff-4a98-b4b6-41d4e3240ddb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 10:08:52 crc kubenswrapper[4869]: E0127 10:08:52.358188 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" 
with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-96jkl" podUID="89faf77c-29ff-4a98-b4b6-41d4e3240ddb" Jan 27 10:08:52 crc kubenswrapper[4869]: E0127 10:08:52.380090 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 10:08:52 crc kubenswrapper[4869]: E0127 10:08:52.380213 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wc9kv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-md25w_openstack(caf80245-78e1-4583-bf43-795cbf39ad22): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 10:08:52 crc kubenswrapper[4869]: E0127 10:08:52.381490 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-md25w" podUID="caf80245-78e1-4583-bf43-795cbf39ad22" Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.731813 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.918258 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.926115 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 10:08:52 crc 
kubenswrapper[4869]: I0127 10:08:52.948263 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 10:08:52 crc kubenswrapper[4869]: I0127 10:08:52.973395 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qf659"] Jan 27 10:08:53 crc kubenswrapper[4869]: I0127 10:08:53.116880 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 10:08:53 crc kubenswrapper[4869]: I0127 10:08:53.225195 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jd977"] Jan 27 10:08:53 crc kubenswrapper[4869]: I0127 10:08:53.641439 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.288690 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qf659" event={"ID":"e545b253-d74a-43e1-9a14-990ea5784f16","Type":"ContainerStarted","Data":"a8aa82a61dacc3fa6a4310a130c372c2638dc09d05a4ea7c6030bbd416d04534"} Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.289971 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-96jkl" event={"ID":"89faf77c-29ff-4a98-b4b6-41d4e3240ddb","Type":"ContainerDied","Data":"681b3bd00f9b2d5fb1b2e176e1c25b5d60c4f123b7a29227c0afb6a354695724"} Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.290011 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="681b3bd00f9b2d5fb1b2e176e1c25b5d60c4f123b7a29227c0afb6a354695724" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.290937 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerStarted","Data":"8abdfcba6606661ba2515d7515dc9e70de0b359dc8fe1076ec90042dd19cf480"} Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.292086 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"94bea268-7e39-4aeb-a45c-8008593eb45c","Type":"ContainerStarted","Data":"4c4f1c1afc2439eb179e875857428e573c0847a6b7426cc07c217bec8bba66af"} Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.293162 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"66641dc3-4cf2-4418-905a-fe1cff14e999","Type":"ContainerStarted","Data":"ae0a595ded9237de8fdc948b562e0bfcd0343af08904b3ed9f0454cc8435c8c4"} Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.294099 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"01cc337f-b143-40db-b4d4-cc66a1549639","Type":"ContainerStarted","Data":"c14c2edeae9bb88603b7d0dcb6ff513161480b82527b2c80eb8944313bfa3e20"} Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.295301 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"518f6a90-a761-4aba-9740-c3aef7d8b0c4","Type":"ContainerStarted","Data":"347050c640ec0567fdaaa6a7e31cedd415c821b21c93bbd2c85e9dea85c4ddb7"} Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.296296 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jd977" event={"ID":"795fb025-6527-42e5-b95f-119a55caf010","Type":"ContainerStarted","Data":"06e4a03292ef2b0769dd2ca8a64541b6b579a15e47f03af5f4ca43033cacbc17"} Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.297172 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-675f4bcbfc-md25w" event={"ID":"caf80245-78e1-4583-bf43-795cbf39ad22","Type":"ContainerDied","Data":"a5bbd4f14ae8cef30874dcbbda44d03fc7f996c5ee25b065dd5d38f9bd83831d"} Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.297198 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5bbd4f14ae8cef30874dcbbda44d03fc7f996c5ee25b065dd5d38f9bd83831d" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.298007 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e88fbb7c-3771-4bd5-a511-af923a24a69f","Type":"ContainerStarted","Data":"5c2d930e0b6a7caa38d9024657f96c0fadbfb1150a3ed6dc9d1b88aaf7e66e94"} Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.350170 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-96jkl" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.378555 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-md25w" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.438949 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvbwx\" (UniqueName: \"kubernetes.io/projected/89faf77c-29ff-4a98-b4b6-41d4e3240ddb-kube-api-access-vvbwx\") pod \"89faf77c-29ff-4a98-b4b6-41d4e3240ddb\" (UID: \"89faf77c-29ff-4a98-b4b6-41d4e3240ddb\") " Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.439315 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc9kv\" (UniqueName: \"kubernetes.io/projected/caf80245-78e1-4583-bf43-795cbf39ad22-kube-api-access-wc9kv\") pod \"caf80245-78e1-4583-bf43-795cbf39ad22\" (UID: \"caf80245-78e1-4583-bf43-795cbf39ad22\") " Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.439347 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caf80245-78e1-4583-bf43-795cbf39ad22-config\") pod \"caf80245-78e1-4583-bf43-795cbf39ad22\" (UID: \"caf80245-78e1-4583-bf43-795cbf39ad22\") " Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.439799 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caf80245-78e1-4583-bf43-795cbf39ad22-config" (OuterVolumeSpecName: "config") pod "caf80245-78e1-4583-bf43-795cbf39ad22" (UID: "caf80245-78e1-4583-bf43-795cbf39ad22"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.439952 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89faf77c-29ff-4a98-b4b6-41d4e3240ddb-dns-svc\") pod \"89faf77c-29ff-4a98-b4b6-41d4e3240ddb\" (UID: \"89faf77c-29ff-4a98-b4b6-41d4e3240ddb\") " Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.440021 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89faf77c-29ff-4a98-b4b6-41d4e3240ddb-config\") pod \"89faf77c-29ff-4a98-b4b6-41d4e3240ddb\" (UID: \"89faf77c-29ff-4a98-b4b6-41d4e3240ddb\") " Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.440201 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89faf77c-29ff-4a98-b4b6-41d4e3240ddb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "89faf77c-29ff-4a98-b4b6-41d4e3240ddb" (UID: "89faf77c-29ff-4a98-b4b6-41d4e3240ddb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.440393 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89faf77c-29ff-4a98-b4b6-41d4e3240ddb-config" (OuterVolumeSpecName: "config") pod "89faf77c-29ff-4a98-b4b6-41d4e3240ddb" (UID: "89faf77c-29ff-4a98-b4b6-41d4e3240ddb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.440576 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caf80245-78e1-4583-bf43-795cbf39ad22-config\") on node \"crc\" DevicePath \"\"" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.440601 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89faf77c-29ff-4a98-b4b6-41d4e3240ddb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.440617 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89faf77c-29ff-4a98-b4b6-41d4e3240ddb-config\") on node \"crc\" DevicePath \"\"" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.511084 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89faf77c-29ff-4a98-b4b6-41d4e3240ddb-kube-api-access-vvbwx" (OuterVolumeSpecName: "kube-api-access-vvbwx") pod "89faf77c-29ff-4a98-b4b6-41d4e3240ddb" (UID: "89faf77c-29ff-4a98-b4b6-41d4e3240ddb"). InnerVolumeSpecName "kube-api-access-vvbwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.511325 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caf80245-78e1-4583-bf43-795cbf39ad22-kube-api-access-wc9kv" (OuterVolumeSpecName: "kube-api-access-wc9kv") pod "caf80245-78e1-4583-bf43-795cbf39ad22" (UID: "caf80245-78e1-4583-bf43-795cbf39ad22"). InnerVolumeSpecName "kube-api-access-wc9kv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.542482 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvbwx\" (UniqueName: \"kubernetes.io/projected/89faf77c-29ff-4a98-b4b6-41d4e3240ddb-kube-api-access-vvbwx\") on node \"crc\" DevicePath \"\"" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.542513 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc9kv\" (UniqueName: \"kubernetes.io/projected/caf80245-78e1-4583-bf43-795cbf39ad22-kube-api-access-wc9kv\") on node \"crc\" DevicePath \"\"" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.722917 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-xl4mk"] Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.723893 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-xl4mk" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.728082 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.737256 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xl4mk"] Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.744614 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/861897b2-ebbe-48ba-851c-e0c902bf8f7f-config\") pod \"ovn-controller-metrics-xl4mk\" (UID: \"861897b2-ebbe-48ba-851c-e0c902bf8f7f\") " pod="openstack/ovn-controller-metrics-xl4mk" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.744771 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/861897b2-ebbe-48ba-851c-e0c902bf8f7f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xl4mk\" (UID: \"861897b2-ebbe-48ba-851c-e0c902bf8f7f\") " pod="openstack/ovn-controller-metrics-xl4mk" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.746119 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9gsf\" (UniqueName: \"kubernetes.io/projected/861897b2-ebbe-48ba-851c-e0c902bf8f7f-kube-api-access-b9gsf\") pod \"ovn-controller-metrics-xl4mk\" (UID: \"861897b2-ebbe-48ba-851c-e0c902bf8f7f\") " pod="openstack/ovn-controller-metrics-xl4mk" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.746303 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/861897b2-ebbe-48ba-851c-e0c902bf8f7f-combined-ca-bundle\") pod \"ovn-controller-metrics-xl4mk\" (UID: \"861897b2-ebbe-48ba-851c-e0c902bf8f7f\") " pod="openstack/ovn-controller-metrics-xl4mk" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.746323 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/861897b2-ebbe-48ba-851c-e0c902bf8f7f-ovn-rundir\") pod \"ovn-controller-metrics-xl4mk\" (UID: \"861897b2-ebbe-48ba-851c-e0c902bf8f7f\") " pod="openstack/ovn-controller-metrics-xl4mk" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.746366 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/861897b2-ebbe-48ba-851c-e0c902bf8f7f-ovs-rundir\") pod \"ovn-controller-metrics-xl4mk\" (UID: \"861897b2-ebbe-48ba-851c-e0c902bf8f7f\") " pod="openstack/ovn-controller-metrics-xl4mk" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.842005 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vbsxr"] Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.848092 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/861897b2-ebbe-48ba-851c-e0c902bf8f7f-combined-ca-bundle\") pod \"ovn-controller-metrics-xl4mk\" (UID: \"861897b2-ebbe-48ba-851c-e0c902bf8f7f\") " pod="openstack/ovn-controller-metrics-xl4mk" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.848135 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/861897b2-ebbe-48ba-851c-e0c902bf8f7f-ovn-rundir\") pod \"ovn-controller-metrics-xl4mk\" (UID: \"861897b2-ebbe-48ba-851c-e0c902bf8f7f\") " pod="openstack/ovn-controller-metrics-xl4mk" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.848163 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/861897b2-ebbe-48ba-851c-e0c902bf8f7f-ovs-rundir\") pod \"ovn-controller-metrics-xl4mk\" (UID: \"861897b2-ebbe-48ba-851c-e0c902bf8f7f\") " pod="openstack/ovn-controller-metrics-xl4mk" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.848197 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/861897b2-ebbe-48ba-851c-e0c902bf8f7f-config\") pod \"ovn-controller-metrics-xl4mk\" (UID: \"861897b2-ebbe-48ba-851c-e0c902bf8f7f\") " pod="openstack/ovn-controller-metrics-xl4mk" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.848223 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/861897b2-ebbe-48ba-851c-e0c902bf8f7f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xl4mk\" (UID: \"861897b2-ebbe-48ba-851c-e0c902bf8f7f\") " pod="openstack/ovn-controller-metrics-xl4mk" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.848263 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9gsf\" (UniqueName: \"kubernetes.io/projected/861897b2-ebbe-48ba-851c-e0c902bf8f7f-kube-api-access-b9gsf\") pod \"ovn-controller-metrics-xl4mk\" (UID: \"861897b2-ebbe-48ba-851c-e0c902bf8f7f\") " pod="openstack/ovn-controller-metrics-xl4mk" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.848771 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/861897b2-ebbe-48ba-851c-e0c902bf8f7f-ovs-rundir\") pod \"ovn-controller-metrics-xl4mk\" (UID: \"861897b2-ebbe-48ba-851c-e0c902bf8f7f\") " pod="openstack/ovn-controller-metrics-xl4mk" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.849235 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/861897b2-ebbe-48ba-851c-e0c902bf8f7f-config\") pod \"ovn-controller-metrics-xl4mk\" (UID: \"861897b2-ebbe-48ba-851c-e0c902bf8f7f\") " pod="openstack/ovn-controller-metrics-xl4mk" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.850673 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/861897b2-ebbe-48ba-851c-e0c902bf8f7f-ovn-rundir\") pod \"ovn-controller-metrics-xl4mk\" (UID: \"861897b2-ebbe-48ba-851c-e0c902bf8f7f\") " pod="openstack/ovn-controller-metrics-xl4mk" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.853330 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/861897b2-ebbe-48ba-851c-e0c902bf8f7f-combined-ca-bundle\") pod \"ovn-controller-metrics-xl4mk\" (UID: \"861897b2-ebbe-48ba-851c-e0c902bf8f7f\") " pod="openstack/ovn-controller-metrics-xl4mk" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.854648 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/861897b2-ebbe-48ba-851c-e0c902bf8f7f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xl4mk\" (UID: \"861897b2-ebbe-48ba-851c-e0c902bf8f7f\") " pod="openstack/ovn-controller-metrics-xl4mk" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.868258 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-knzmv"] Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.869601 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.872095 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.873814 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9gsf\" (UniqueName: \"kubernetes.io/projected/861897b2-ebbe-48ba-851c-e0c902bf8f7f-kube-api-access-b9gsf\") pod \"ovn-controller-metrics-xl4mk\" (UID: \"861897b2-ebbe-48ba-851c-e0c902bf8f7f\") " pod="openstack/ovn-controller-metrics-xl4mk" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.876048 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-knzmv"] Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.949797 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a858d35-6c3c-4280-94e3-432f7a644440-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-knzmv\" (UID: \"9a858d35-6c3c-4280-94e3-432f7a644440\") " pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.949916 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rx4w\" (UniqueName: \"kubernetes.io/projected/9a858d35-6c3c-4280-94e3-432f7a644440-kube-api-access-2rx4w\") pod \"dnsmasq-dns-7fd796d7df-knzmv\" (UID: \"9a858d35-6c3c-4280-94e3-432f7a644440\") " pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.949960 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a858d35-6c3c-4280-94e3-432f7a644440-config\") pod \"dnsmasq-dns-7fd796d7df-knzmv\" (UID: \"9a858d35-6c3c-4280-94e3-432f7a644440\") " pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.949979 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/9a858d35-6c3c-4280-94e3-432f7a644440-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-knzmv\" (UID: \"9a858d35-6c3c-4280-94e3-432f7a644440\") " pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" Jan 27 10:08:54 crc kubenswrapper[4869]: I0127 10:08:54.987068 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bckqb"] Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.012745 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6dhv4"] Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.013983 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.015720 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.032172 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6dhv4"] Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.049336 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-xl4mk" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.051460 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a858d35-6c3c-4280-94e3-432f7a644440-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-knzmv\" (UID: \"9a858d35-6c3c-4280-94e3-432f7a644440\") " pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.051512 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-6dhv4\" (UID: \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\") " pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.051539 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-6dhv4\" (UID: \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\") " pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.051572 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h455t\" (UniqueName: \"kubernetes.io/projected/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-kube-api-access-h455t\") pod \"dnsmasq-dns-86db49b7ff-6dhv4\" (UID: \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\") " pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.051611 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rx4w\" (UniqueName: \"kubernetes.io/projected/9a858d35-6c3c-4280-94e3-432f7a644440-kube-api-access-2rx4w\") pod \"dnsmasq-dns-7fd796d7df-knzmv\" (UID: \"9a858d35-6c3c-4280-94e3-432f7a644440\") " pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.051637 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-6dhv4\" (UID: 
\"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\") " pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.051664 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-config\") pod \"dnsmasq-dns-86db49b7ff-6dhv4\" (UID: \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\") " pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.051684 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a858d35-6c3c-4280-94e3-432f7a644440-config\") pod \"dnsmasq-dns-7fd796d7df-knzmv\" (UID: \"9a858d35-6c3c-4280-94e3-432f7a644440\") " pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.051703 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a858d35-6c3c-4280-94e3-432f7a644440-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-knzmv\" (UID: \"9a858d35-6c3c-4280-94e3-432f7a644440\") " pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.052780 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a858d35-6c3c-4280-94e3-432f7a644440-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-knzmv\" (UID: \"9a858d35-6c3c-4280-94e3-432f7a644440\") " pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.053354 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a858d35-6c3c-4280-94e3-432f7a644440-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-knzmv\" (UID: \"9a858d35-6c3c-4280-94e3-432f7a644440\") " pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.054746 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a858d35-6c3c-4280-94e3-432f7a644440-config\") pod \"dnsmasq-dns-7fd796d7df-knzmv\" (UID: \"9a858d35-6c3c-4280-94e3-432f7a644440\") " pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.069427 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rx4w\" (UniqueName: \"kubernetes.io/projected/9a858d35-6c3c-4280-94e3-432f7a644440-kube-api-access-2rx4w\") pod \"dnsmasq-dns-7fd796d7df-knzmv\" (UID: \"9a858d35-6c3c-4280-94e3-432f7a644440\") " pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.153682 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h455t\" (UniqueName: \"kubernetes.io/projected/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-kube-api-access-h455t\") pod \"dnsmasq-dns-86db49b7ff-6dhv4\" (UID: \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\") " pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.154078 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-6dhv4\" (UID: \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\") " pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" Jan 27 10:08:55 crc 
kubenswrapper[4869]: I0127 10:08:55.154121 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-config\") pod \"dnsmasq-dns-86db49b7ff-6dhv4\" (UID: \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\") " pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.154202 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-6dhv4\" (UID: \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\") " pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.154232 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-6dhv4\" (UID: \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\") " pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.154985 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-6dhv4\" (UID: \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\") " pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.155235 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-6dhv4\" (UID: \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\") " pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.155589 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-config\") pod \"dnsmasq-dns-86db49b7ff-6dhv4\" (UID: \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\") " pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.155835 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-6dhv4\" (UID: \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\") " pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.169257 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h455t\" (UniqueName: \"kubernetes.io/projected/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-kube-api-access-h455t\") pod \"dnsmasq-dns-86db49b7ff-6dhv4\" (UID: \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\") " pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.190943 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.308789 4869 generic.go:334] "Generic (PLEG): container finished" podID="c60cfa24-5bbd-427e-be0c-428900867c80" containerID="b033edfa82272b0c0e8f5358c768eddd5a4e47895ee401f5d344049bc3790a57" exitCode=0 Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.308887 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bckqb" event={"ID":"c60cfa24-5bbd-427e-be0c-428900867c80","Type":"ContainerDied","Data":"b033edfa82272b0c0e8f5358c768eddd5a4e47895ee401f5d344049bc3790a57"} Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.312602 4869 generic.go:334] "Generic (PLEG): container finished" podID="f6476143-8339-4837-8444-2bb4141d5da5" containerID="7e173dee52b2195998d431feeb1a3cec8c37aa65350c369d904308ca1b1b69ca" exitCode=0 Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.312672 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vbsxr" event={"ID":"f6476143-8339-4837-8444-2bb4141d5da5","Type":"ContainerDied","Data":"7e173dee52b2195998d431feeb1a3cec8c37aa65350c369d904308ca1b1b69ca"} Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.315452 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-md25w" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.320940 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerStarted","Data":"f467478ea01cc678f7c6abc730ff8cc244d20d6520b05fbe2af67046c78142ce"} Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.321011 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-96jkl" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.338938 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.411445 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-96jkl"] Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.421888 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-96jkl"] Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.434167 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-md25w"] Jan 27 10:08:55 crc kubenswrapper[4869]: I0127 10:08:55.442727 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-md25w"] Jan 27 10:08:56 crc kubenswrapper[4869]: I0127 10:08:56.044903 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89faf77c-29ff-4a98-b4b6-41d4e3240ddb" path="/var/lib/kubelet/pods/89faf77c-29ff-4a98-b4b6-41d4e3240ddb/volumes" Jan 27 10:08:56 crc kubenswrapper[4869]: I0127 10:08:56.045314 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caf80245-78e1-4583-bf43-795cbf39ad22" path="/var/lib/kubelet/pods/caf80245-78e1-4583-bf43-795cbf39ad22/volumes" Jan 27 10:08:57 crc kubenswrapper[4869]: I0127 10:08:57.228531 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xl4mk"] Jan 27 10:08:57 crc kubenswrapper[4869]: I0127 10:08:57.238212 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6dhv4"] Jan 27 10:08:57 crc kubenswrapper[4869]: W0127 10:08:57.249851 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e469c0b_9a31_4e17_ad6a_8d9abfcc91b5.slice/crio-b1d6053e8f7ff4e048fca544bd5cab73bcab0365e8f51925f7af5ed51fbf3a14 WatchSource:0}: Error finding container b1d6053e8f7ff4e048fca544bd5cab73bcab0365e8f51925f7af5ed51fbf3a14: Status 404 returned error can't find the container with id b1d6053e8f7ff4e048fca544bd5cab73bcab0365e8f51925f7af5ed51fbf3a14 Jan 27 10:08:57 crc kubenswrapper[4869]: W0127 10:08:57.253918 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod861897b2_ebbe_48ba_851c_e0c902bf8f7f.slice/crio-e7304e9f553b1c238e1d18fa9b5d5ddf717fc19a60ef86e8942d65d21696a1fa WatchSource:0}: Error finding container e7304e9f553b1c238e1d18fa9b5d5ddf717fc19a60ef86e8942d65d21696a1fa: Status 404 returned error can't find the container with id e7304e9f553b1c238e1d18fa9b5d5ddf717fc19a60ef86e8942d65d21696a1fa Jan 27 10:08:57 crc kubenswrapper[4869]: I0127 10:08:57.338370 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" event={"ID":"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5","Type":"ContainerStarted","Data":"b1d6053e8f7ff4e048fca544bd5cab73bcab0365e8f51925f7af5ed51fbf3a14"} Jan 27 10:08:57 crc kubenswrapper[4869]: I0127 10:08:57.338948 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-knzmv"] Jan 27 10:08:57 crc kubenswrapper[4869]: I0127 10:08:57.341274 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9622ab05-c494-4c2b-b376-6f82ded8bdc5","Type":"ContainerStarted","Data":"30e6d16f6b6ee400b6ab60cc95b482f97db2311275b2e57019c75503d04f0e1b"} Jan 27 10:08:57 crc kubenswrapper[4869]: I0127 10:08:57.346247 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"518f6a90-a761-4aba-9740-c3aef7d8b0c4","Type":"ContainerStarted","Data":"22dab009a2bb6b259242119891a05ce8e3f87a2162c5ed0e988335ddd1d56ca7"} Jan 27 10:08:57 crc kubenswrapper[4869]: I0127 10:08:57.349690 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bckqb" event={"ID":"c60cfa24-5bbd-427e-be0c-428900867c80","Type":"ContainerStarted","Data":"c4917bcb115e95dd328454b807732fc5a9d6a14f24fa7343fc97ffdd42ccd51e"} Jan 27 10:08:57 crc kubenswrapper[4869]: I0127 10:08:57.349788 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-bckqb" Jan 27 10:08:57 crc kubenswrapper[4869]: I0127 10:08:57.349764 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-bckqb" podUID="c60cfa24-5bbd-427e-be0c-428900867c80" containerName="dnsmasq-dns" containerID="cri-o://c4917bcb115e95dd328454b807732fc5a9d6a14f24fa7343fc97ffdd42ccd51e" gracePeriod=10 Jan 27 10:08:57 crc kubenswrapper[4869]: I0127 10:08:57.352258 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xl4mk" event={"ID":"861897b2-ebbe-48ba-851c-e0c902bf8f7f","Type":"ContainerStarted","Data":"e7304e9f553b1c238e1d18fa9b5d5ddf717fc19a60ef86e8942d65d21696a1fa"} Jan 27 10:08:57 crc kubenswrapper[4869]: I0127 10:08:57.356565 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vbsxr" event={"ID":"f6476143-8339-4837-8444-2bb4141d5da5","Type":"ContainerStarted","Data":"c6133f223be8f59a5400dee48da471f4f4ab13142575acce5a133a7040e447a7"} Jan 27 10:08:57 crc kubenswrapper[4869]: I0127 10:08:57.356735 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-vbsxr" podUID="f6476143-8339-4837-8444-2bb4141d5da5" containerName="dnsmasq-dns" containerID="cri-o://c6133f223be8f59a5400dee48da471f4f4ab13142575acce5a133a7040e447a7" gracePeriod=10 Jan 27 10:08:57 crc kubenswrapper[4869]: I0127 10:08:57.356941 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-vbsxr" Jan 27 10:08:57 crc kubenswrapper[4869]: I0127 10:08:57.376721 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-vbsxr" podStartSLOduration=8.082932031 podStartE2EDuration="19.376705815s" podCreationTimestamp="2026-01-27 10:08:38 +0000 UTC" firstStartedPulling="2026-01-27 10:08:42.829036965 +0000 UTC m=+891.449461048" lastFinishedPulling="2026-01-27 10:08:54.122810739 +0000 UTC m=+902.743234832" observedRunningTime="2026-01-27 10:08:57.370954887 +0000 UTC m=+905.991378990" watchObservedRunningTime="2026-01-27 10:08:57.376705815 +0000 UTC m=+905.997129898" Jan 27 10:08:57 crc kubenswrapper[4869]: I0127 10:08:57.389738 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-bckqb" podStartSLOduration=7.993251739 podStartE2EDuration="19.389721222s" podCreationTimestamp="2026-01-27 10:08:38 +0000 UTC" firstStartedPulling="2026-01-27 10:08:42.806136445 +0000 UTC m=+891.426560528" lastFinishedPulling="2026-01-27 10:08:54.202605938 +0000 UTC m=+902.823030011" observedRunningTime="2026-01-27 10:08:57.386716204 +0000 UTC m=+906.007140277" watchObservedRunningTime="2026-01-27 10:08:57.389721222 +0000 UTC m=+906.010145305" Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.178202 4869 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vbsxr" Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.179909 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bckqb" Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.305571 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6476143-8339-4837-8444-2bb4141d5da5-dns-svc\") pod \"f6476143-8339-4837-8444-2bb4141d5da5\" (UID: \"f6476143-8339-4837-8444-2bb4141d5da5\") " Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.305626 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c60cfa24-5bbd-427e-be0c-428900867c80-dns-svc\") pod \"c60cfa24-5bbd-427e-be0c-428900867c80\" (UID: \"c60cfa24-5bbd-427e-be0c-428900867c80\") " Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.305651 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wjpd\" (UniqueName: \"kubernetes.io/projected/f6476143-8339-4837-8444-2bb4141d5da5-kube-api-access-8wjpd\") pod \"f6476143-8339-4837-8444-2bb4141d5da5\" (UID: \"f6476143-8339-4837-8444-2bb4141d5da5\") " Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.305728 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c60cfa24-5bbd-427e-be0c-428900867c80-config\") pod \"c60cfa24-5bbd-427e-be0c-428900867c80\" (UID: \"c60cfa24-5bbd-427e-be0c-428900867c80\") " Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.305815 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prlmx\" (UniqueName: \"kubernetes.io/projected/c60cfa24-5bbd-427e-be0c-428900867c80-kube-api-access-prlmx\") pod \"c60cfa24-5bbd-427e-be0c-428900867c80\" (UID: \"c60cfa24-5bbd-427e-be0c-428900867c80\") " Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.305862 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6476143-8339-4837-8444-2bb4141d5da5-config\") pod \"f6476143-8339-4837-8444-2bb4141d5da5\" (UID: \"f6476143-8339-4837-8444-2bb4141d5da5\") " Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.311459 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c60cfa24-5bbd-427e-be0c-428900867c80-kube-api-access-prlmx" (OuterVolumeSpecName: "kube-api-access-prlmx") pod "c60cfa24-5bbd-427e-be0c-428900867c80" (UID: "c60cfa24-5bbd-427e-be0c-428900867c80"). InnerVolumeSpecName "kube-api-access-prlmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.312156 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6476143-8339-4837-8444-2bb4141d5da5-kube-api-access-8wjpd" (OuterVolumeSpecName: "kube-api-access-8wjpd") pod "f6476143-8339-4837-8444-2bb4141d5da5" (UID: "f6476143-8339-4837-8444-2bb4141d5da5"). InnerVolumeSpecName "kube-api-access-8wjpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.353766 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6476143-8339-4837-8444-2bb4141d5da5-config" (OuterVolumeSpecName: "config") pod "f6476143-8339-4837-8444-2bb4141d5da5" (UID: "f6476143-8339-4837-8444-2bb4141d5da5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.371663 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerStarted","Data":"4a438d6694b54e570c6899662ea865d06be06fa8ba35abc67332c2d580cd3da4"} Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.375810 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c60cfa24-5bbd-427e-be0c-428900867c80-config" (OuterVolumeSpecName: "config") pod "c60cfa24-5bbd-427e-be0c-428900867c80" (UID: "c60cfa24-5bbd-427e-be0c-428900867c80"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.377542 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c60cfa24-5bbd-427e-be0c-428900867c80-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c60cfa24-5bbd-427e-be0c-428900867c80" (UID: "c60cfa24-5bbd-427e-be0c-428900867c80"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.383060 4869 generic.go:334] "Generic (PLEG): container finished" podID="c60cfa24-5bbd-427e-be0c-428900867c80" containerID="c4917bcb115e95dd328454b807732fc5a9d6a14f24fa7343fc97ffdd42ccd51e" exitCode=0 Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.383115 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bckqb" Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.383135 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bckqb" event={"ID":"c60cfa24-5bbd-427e-be0c-428900867c80","Type":"ContainerDied","Data":"c4917bcb115e95dd328454b807732fc5a9d6a14f24fa7343fc97ffdd42ccd51e"} Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.383162 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bckqb" event={"ID":"c60cfa24-5bbd-427e-be0c-428900867c80","Type":"ContainerDied","Data":"7158b1d6f82cc7ff540903301fb4ab9080b55bffc63aa4ddf9f4a7eff8015908"} Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.383178 4869 scope.go:117] "RemoveContainer" containerID="c4917bcb115e95dd328454b807732fc5a9d6a14f24fa7343fc97ffdd42ccd51e" Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.384449 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6476143-8339-4837-8444-2bb4141d5da5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f6476143-8339-4837-8444-2bb4141d5da5" (UID: "f6476143-8339-4837-8444-2bb4141d5da5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.385169 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" event={"ID":"9a858d35-6c3c-4280-94e3-432f7a644440","Type":"ContainerStarted","Data":"6e960ba8b3a8c4910c93507877dd6f3fb1986cf5064d0bf4d390de706d42b5a2"} Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.402422 4869 generic.go:334] "Generic (PLEG): container finished" podID="f6476143-8339-4837-8444-2bb4141d5da5" containerID="c6133f223be8f59a5400dee48da471f4f4ab13142575acce5a133a7040e447a7" exitCode=0 Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.402630 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vbsxr" event={"ID":"f6476143-8339-4837-8444-2bb4141d5da5","Type":"ContainerDied","Data":"c6133f223be8f59a5400dee48da471f4f4ab13142575acce5a133a7040e447a7"} Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.402672 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-vbsxr" event={"ID":"f6476143-8339-4837-8444-2bb4141d5da5","Type":"ContainerDied","Data":"252dea5c7a5dfc7c94234e2a61a817bd839f48dc3806b5101bd7c474766e414a"} Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.402714 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-vbsxr" Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.410426 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6476143-8339-4837-8444-2bb4141d5da5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.410460 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c60cfa24-5bbd-427e-be0c-428900867c80-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.410474 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wjpd\" (UniqueName: \"kubernetes.io/projected/f6476143-8339-4837-8444-2bb4141d5da5-kube-api-access-8wjpd\") on node \"crc\" DevicePath \"\"" Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.410485 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c60cfa24-5bbd-427e-be0c-428900867c80-config\") on node \"crc\" DevicePath \"\"" Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.410494 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prlmx\" (UniqueName: \"kubernetes.io/projected/c60cfa24-5bbd-427e-be0c-428900867c80-kube-api-access-prlmx\") on node \"crc\" DevicePath \"\"" Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.410502 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6476143-8339-4837-8444-2bb4141d5da5-config\") on node \"crc\" DevicePath \"\"" Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.428238 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bckqb"] Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.435335 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bckqb"] Jan 27 10:08:58 crc kubenswrapper[4869]: I0127 10:08:58.439905 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vbsxr"] Jan 27 10:08:58 crc kubenswrapper[4869]: 
I0127 10:08:58.446017 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-vbsxr"] Jan 27 10:09:00 crc kubenswrapper[4869]: I0127 10:09:00.041081 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c60cfa24-5bbd-427e-be0c-428900867c80" path="/var/lib/kubelet/pods/c60cfa24-5bbd-427e-be0c-428900867c80/volumes" Jan 27 10:09:00 crc kubenswrapper[4869]: I0127 10:09:00.041953 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6476143-8339-4837-8444-2bb4141d5da5" path="/var/lib/kubelet/pods/f6476143-8339-4837-8444-2bb4141d5da5/volumes" Jan 27 10:09:00 crc kubenswrapper[4869]: I0127 10:09:00.418444 4869 generic.go:334] "Generic (PLEG): container finished" podID="518f6a90-a761-4aba-9740-c3aef7d8b0c4" containerID="22dab009a2bb6b259242119891a05ce8e3f87a2162c5ed0e988335ddd1d56ca7" exitCode=0 Jan 27 10:09:00 crc kubenswrapper[4869]: I0127 10:09:00.418486 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"518f6a90-a761-4aba-9740-c3aef7d8b0c4","Type":"ContainerDied","Data":"22dab009a2bb6b259242119891a05ce8e3f87a2162c5ed0e988335ddd1d56ca7"} Jan 27 10:09:00 crc kubenswrapper[4869]: I0127 10:09:00.420761 4869 scope.go:117] "RemoveContainer" containerID="b033edfa82272b0c0e8f5358c768eddd5a4e47895ee401f5d344049bc3790a57" Jan 27 10:09:01 crc kubenswrapper[4869]: I0127 10:09:01.427346 4869 generic.go:334] "Generic (PLEG): container finished" podID="9622ab05-c494-4c2b-b376-6f82ded8bdc5" containerID="30e6d16f6b6ee400b6ab60cc95b482f97db2311275b2e57019c75503d04f0e1b" exitCode=0 Jan 27 10:09:01 crc kubenswrapper[4869]: I0127 10:09:01.427435 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9622ab05-c494-4c2b-b376-6f82ded8bdc5","Type":"ContainerDied","Data":"30e6d16f6b6ee400b6ab60cc95b482f97db2311275b2e57019c75503d04f0e1b"} Jan 27 10:09:06 crc kubenswrapper[4869]: I0127 10:09:06.954299 4869 scope.go:117] "RemoveContainer" containerID="c4917bcb115e95dd328454b807732fc5a9d6a14f24fa7343fc97ffdd42ccd51e" Jan 27 10:09:06 crc kubenswrapper[4869]: E0127 10:09:06.955434 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4917bcb115e95dd328454b807732fc5a9d6a14f24fa7343fc97ffdd42ccd51e\": container with ID starting with c4917bcb115e95dd328454b807732fc5a9d6a14f24fa7343fc97ffdd42ccd51e not found: ID does not exist" containerID="c4917bcb115e95dd328454b807732fc5a9d6a14f24fa7343fc97ffdd42ccd51e" Jan 27 10:09:06 crc kubenswrapper[4869]: I0127 10:09:06.955485 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4917bcb115e95dd328454b807732fc5a9d6a14f24fa7343fc97ffdd42ccd51e"} err="failed to get container status \"c4917bcb115e95dd328454b807732fc5a9d6a14f24fa7343fc97ffdd42ccd51e\": rpc error: code = NotFound desc = could not find container \"c4917bcb115e95dd328454b807732fc5a9d6a14f24fa7343fc97ffdd42ccd51e\": container with ID starting with c4917bcb115e95dd328454b807732fc5a9d6a14f24fa7343fc97ffdd42ccd51e not found: ID does not exist" Jan 27 10:09:06 crc kubenswrapper[4869]: I0127 10:09:06.955520 4869 scope.go:117] "RemoveContainer" containerID="b033edfa82272b0c0e8f5358c768eddd5a4e47895ee401f5d344049bc3790a57" Jan 27 10:09:06 crc kubenswrapper[4869]: E0127 10:09:06.955950 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b033edfa82272b0c0e8f5358c768eddd5a4e47895ee401f5d344049bc3790a57\": container with ID starting with b033edfa82272b0c0e8f5358c768eddd5a4e47895ee401f5d344049bc3790a57 not found: ID does not exist" containerID="b033edfa82272b0c0e8f5358c768eddd5a4e47895ee401f5d344049bc3790a57" Jan 27 10:09:06 crc kubenswrapper[4869]: I0127 10:09:06.955979 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b033edfa82272b0c0e8f5358c768eddd5a4e47895ee401f5d344049bc3790a57"} err="failed to get container status \"b033edfa82272b0c0e8f5358c768eddd5a4e47895ee401f5d344049bc3790a57\": rpc error: code = NotFound desc = could not find container \"b033edfa82272b0c0e8f5358c768eddd5a4e47895ee401f5d344049bc3790a57\": container with ID starting with b033edfa82272b0c0e8f5358c768eddd5a4e47895ee401f5d344049bc3790a57 not found: ID does not exist" Jan 27 10:09:06 crc kubenswrapper[4869]: I0127 10:09:06.956000 4869 scope.go:117] "RemoveContainer" containerID="c6133f223be8f59a5400dee48da471f4f4ab13142575acce5a133a7040e447a7" Jan 27 10:09:07 crc kubenswrapper[4869]: I0127 10:09:07.474224 4869 generic.go:334] "Generic (PLEG): container finished" podID="0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5" containerID="d3740dfb68c44ffb7fffb66a92bd29a92913023df008bc7641d6d225c77634aa" exitCode=0 Jan 27 10:09:07 crc kubenswrapper[4869]: I0127 10:09:07.474285 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" event={"ID":"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5","Type":"ContainerDied","Data":"d3740dfb68c44ffb7fffb66a92bd29a92913023df008bc7641d6d225c77634aa"} Jan 27 10:09:07 crc kubenswrapper[4869]: I0127 10:09:07.721494 4869 scope.go:117] "RemoveContainer" containerID="7e173dee52b2195998d431feeb1a3cec8c37aa65350c369d904308ca1b1b69ca" Jan 27 10:09:08 crc kubenswrapper[4869]: I0127 10:09:08.365032 4869 scope.go:117] "RemoveContainer" containerID="c6133f223be8f59a5400dee48da471f4f4ab13142575acce5a133a7040e447a7" Jan 27 10:09:08 crc kubenswrapper[4869]: E0127 10:09:08.366216 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6133f223be8f59a5400dee48da471f4f4ab13142575acce5a133a7040e447a7\": container with ID starting with c6133f223be8f59a5400dee48da471f4f4ab13142575acce5a133a7040e447a7 not found: ID does not exist" containerID="c6133f223be8f59a5400dee48da471f4f4ab13142575acce5a133a7040e447a7" Jan 27 10:09:08 crc kubenswrapper[4869]: I0127 10:09:08.366262 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6133f223be8f59a5400dee48da471f4f4ab13142575acce5a133a7040e447a7"} err="failed to get container status \"c6133f223be8f59a5400dee48da471f4f4ab13142575acce5a133a7040e447a7\": rpc error: code = NotFound desc = could not find container \"c6133f223be8f59a5400dee48da471f4f4ab13142575acce5a133a7040e447a7\": container with ID starting with c6133f223be8f59a5400dee48da471f4f4ab13142575acce5a133a7040e447a7 not found: ID does not exist" Jan 27 10:09:08 crc kubenswrapper[4869]: I0127 10:09:08.366284 4869 scope.go:117] "RemoveContainer" containerID="7e173dee52b2195998d431feeb1a3cec8c37aa65350c369d904308ca1b1b69ca" Jan 27 10:09:08 crc kubenswrapper[4869]: E0127 10:09:08.366677 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e173dee52b2195998d431feeb1a3cec8c37aa65350c369d904308ca1b1b69ca\": container with ID starting with 
7e173dee52b2195998d431feeb1a3cec8c37aa65350c369d904308ca1b1b69ca not found: ID does not exist" containerID="7e173dee52b2195998d431feeb1a3cec8c37aa65350c369d904308ca1b1b69ca" Jan 27 10:09:08 crc kubenswrapper[4869]: I0127 10:09:08.366695 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e173dee52b2195998d431feeb1a3cec8c37aa65350c369d904308ca1b1b69ca"} err="failed to get container status \"7e173dee52b2195998d431feeb1a3cec8c37aa65350c369d904308ca1b1b69ca\": rpc error: code = NotFound desc = could not find container \"7e173dee52b2195998d431feeb1a3cec8c37aa65350c369d904308ca1b1b69ca\": container with ID starting with 7e173dee52b2195998d431feeb1a3cec8c37aa65350c369d904308ca1b1b69ca not found: ID does not exist" Jan 27 10:09:08 crc kubenswrapper[4869]: I0127 10:09:08.524974 4869 generic.go:334] "Generic (PLEG): container finished" podID="9a858d35-6c3c-4280-94e3-432f7a644440" containerID="ccc26590f8b46477237dc7bb94b192f7cd77fa8b169e74a2f7cd9fb9f66f576f" exitCode=0 Jan 27 10:09:08 crc kubenswrapper[4869]: I0127 10:09:08.525015 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" event={"ID":"9a858d35-6c3c-4280-94e3-432f7a644440","Type":"ContainerDied","Data":"ccc26590f8b46477237dc7bb94b192f7cd77fa8b169e74a2f7cd9fb9f66f576f"} Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.540141 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e88fbb7c-3771-4bd5-a511-af923a24a69f","Type":"ContainerStarted","Data":"bf9201199a4ea8eed9f120f44f662dd6294de5b6a22da4ee7b91aaa9a93302ae"} Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.541009 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e88fbb7c-3771-4bd5-a511-af923a24a69f","Type":"ContainerStarted","Data":"c1e5a21acf6d9d8500b59c219cf365e1ec9a0bbe1fcdda6c41f24532ec6382b5"} Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.543023 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" event={"ID":"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5","Type":"ContainerStarted","Data":"85dbb6fd84162640527cdac258cc1cd28e8446189b73e3f09660d6676dbe770e"} Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.543200 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.546469 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9622ab05-c494-4c2b-b376-6f82ded8bdc5","Type":"ContainerStarted","Data":"99a59fc749767639506bb966fe27997b0740352a0e31335df5ec5f2ef065f118"} Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.549628 4869 generic.go:334] "Generic (PLEG): container finished" podID="795fb025-6527-42e5-b95f-119a55caf010" containerID="8a430115d9c355c2584dabeac591fbf460b7d5617bfc04a8ffb45dfd83b143ef" exitCode=0 Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.549752 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jd977" event={"ID":"795fb025-6527-42e5-b95f-119a55caf010","Type":"ContainerDied","Data":"8a430115d9c355c2584dabeac591fbf460b7d5617bfc04a8ffb45dfd83b143ef"} Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.552990 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"01cc337f-b143-40db-b4d4-cc66a1549639","Type":"ContainerStarted","Data":"110f5c6779c0f188d0cba669ff079c47c8ea0a78a50641aff44b0e1feeae9611"} Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.554032 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.563592 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"518f6a90-a761-4aba-9740-c3aef7d8b0c4","Type":"ContainerStarted","Data":"839b688d311e56b5f8e06bd2b21f34463c394ee365d58d39319363c36e4fce7a"} Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.567572 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xl4mk" event={"ID":"861897b2-ebbe-48ba-851c-e0c902bf8f7f","Type":"ContainerStarted","Data":"57e3d372699e98b9788a7d76122786c4a33db3fc4cd13d0b73ad49288e0d25a6"} Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.576952 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" event={"ID":"9a858d35-6c3c-4280-94e3-432f7a644440","Type":"ContainerStarted","Data":"751c9385ce867b9867b74e81c59c9c98c3b2d1980f35abfd2d0fa5d86a4477a6"} Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.577967 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.593365 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=8.019559087 podStartE2EDuration="21.593336882s" podCreationTimestamp="2026-01-27 10:08:48 +0000 UTC" firstStartedPulling="2026-01-27 10:08:54.152397486 +0000 UTC m=+902.772821569" lastFinishedPulling="2026-01-27 10:09:07.726175281 +0000 UTC m=+916.346599364" observedRunningTime="2026-01-27 10:09:09.578187953 +0000 UTC m=+918.198612066" watchObservedRunningTime="2026-01-27 10:09:09.593336882 +0000 UTC m=+918.213760975" Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.595308 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"94bea268-7e39-4aeb-a45c-8008593eb45c","Type":"ContainerStarted","Data":"0628b94977e9f5570b2fd997f07faacc0375fe8eab257a73f9052a9410101417"} Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.595735 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.606758 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"66641dc3-4cf2-4418-905a-fe1cff14e999","Type":"ContainerStarted","Data":"ffb8a4ec2139aefa85ab5c62070a9889b56e0e412e88e0947d0a9d3ca924db3a"} Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.606862 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"66641dc3-4cf2-4418-905a-fe1cff14e999","Type":"ContainerStarted","Data":"f1f847bedf38ac09139ccf5bd463904f564d9d2f03d463accd650bf98dd46f85"} Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.609647 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qf659" event={"ID":"e545b253-d74a-43e1-9a14-990ea5784f16","Type":"ContainerStarted","Data":"32ba826822de351dc9770f9f7b671a7b2ed2f2d1b19f43b2bac4badfb0185bae"} Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.609823 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovn-controller-qf659" Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.611208 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" podStartSLOduration=15.611187443 podStartE2EDuration="15.611187443s" podCreationTimestamp="2026-01-27 10:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 10:09:09.60554211 +0000 UTC m=+918.225966213" watchObservedRunningTime="2026-01-27 10:09:09.611187443 +0000 UTC m=+918.231611536" Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.623616 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=11.384167694 podStartE2EDuration="25.623598278s" podCreationTimestamp="2026-01-27 10:08:44 +0000 UTC" firstStartedPulling="2026-01-27 10:08:54.20265684 +0000 UTC m=+902.823080923" lastFinishedPulling="2026-01-27 10:09:08.442087434 +0000 UTC m=+917.062511507" observedRunningTime="2026-01-27 10:09:09.622463579 +0000 UTC m=+918.242887672" watchObservedRunningTime="2026-01-27 10:09:09.623598278 +0000 UTC m=+918.244022371" Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.662352 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=26.065784128 podStartE2EDuration="28.662332164s" podCreationTimestamp="2026-01-27 10:08:41 +0000 UTC" firstStartedPulling="2026-01-27 10:08:54.149666857 +0000 UTC m=+902.770090940" lastFinishedPulling="2026-01-27 10:08:56.746214873 +0000 UTC m=+905.366638976" observedRunningTime="2026-01-27 10:09:09.655142368 +0000 UTC m=+918.275566471" watchObservedRunningTime="2026-01-27 10:09:09.662332164 +0000 UTC m=+918.282756257" Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.686973 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" podStartSLOduration=15.686948697 podStartE2EDuration="15.686948697s" podCreationTimestamp="2026-01-27 10:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 10:09:09.670810185 +0000 UTC m=+918.291234288" watchObservedRunningTime="2026-01-27 10:09:09.686948697 +0000 UTC m=+918.307372790" Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.718943 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=24.241105684 podStartE2EDuration="29.718924462s" podCreationTimestamp="2026-01-27 10:08:40 +0000 UTC" firstStartedPulling="2026-01-27 10:08:51.340330849 +0000 UTC m=+899.960754972" lastFinishedPulling="2026-01-27 10:08:56.818149667 +0000 UTC m=+905.438573750" observedRunningTime="2026-01-27 10:09:09.709279072 +0000 UTC m=+918.329703155" watchObservedRunningTime="2026-01-27 10:09:09.718924462 +0000 UTC m=+918.339348545" Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.727393 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-xl4mk" podStartSLOduration=4.572800069 podStartE2EDuration="15.727378342s" podCreationTimestamp="2026-01-27 10:08:54 +0000 UTC" firstStartedPulling="2026-01-27 10:08:57.280181169 +0000 UTC m=+905.900605252" lastFinishedPulling="2026-01-27 10:09:08.434759442 +0000 UTC m=+917.055183525" observedRunningTime="2026-01-27 10:09:09.722341529 +0000 UTC 
m=+918.342765612" watchObservedRunningTime="2026-01-27 10:09:09.727378342 +0000 UTC m=+918.347802425" Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.774203 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=6.124775612 podStartE2EDuration="19.774185934s" podCreationTimestamp="2026-01-27 10:08:50 +0000 UTC" firstStartedPulling="2026-01-27 10:08:54.202607608 +0000 UTC m=+902.823031691" lastFinishedPulling="2026-01-27 10:09:07.85201792 +0000 UTC m=+916.472442013" observedRunningTime="2026-01-27 10:09:09.770231458 +0000 UTC m=+918.390655531" watchObservedRunningTime="2026-01-27 10:09:09.774185934 +0000 UTC m=+918.394610017" Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.794005 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=20.150222844 podStartE2EDuration="27.793987842s" podCreationTimestamp="2026-01-27 10:08:42 +0000 UTC" firstStartedPulling="2026-01-27 10:08:54.136282259 +0000 UTC m=+902.756706342" lastFinishedPulling="2026-01-27 10:09:01.780047257 +0000 UTC m=+910.400471340" observedRunningTime="2026-01-27 10:09:09.78985937 +0000 UTC m=+918.410283453" watchObservedRunningTime="2026-01-27 10:09:09.793987842 +0000 UTC m=+918.414411925" Jan 27 10:09:09 crc kubenswrapper[4869]: I0127 10:09:09.807715 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qf659" podStartSLOduration=7.675396696 podStartE2EDuration="21.807697882s" podCreationTimestamp="2026-01-27 10:08:48 +0000 UTC" firstStartedPulling="2026-01-27 10:08:54.202428102 +0000 UTC m=+902.822852185" lastFinishedPulling="2026-01-27 10:09:08.334729288 +0000 UTC m=+916.955153371" observedRunningTime="2026-01-27 10:09:09.804851014 +0000 UTC m=+918.425275097" watchObservedRunningTime="2026-01-27 10:09:09.807697882 +0000 UTC m=+918.428121965" Jan 27 10:09:10 crc kubenswrapper[4869]: I0127 10:09:10.194142 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 27 10:09:10 crc kubenswrapper[4869]: I0127 10:09:10.323468 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 27 10:09:10 crc kubenswrapper[4869]: I0127 10:09:10.624318 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jd977" event={"ID":"795fb025-6527-42e5-b95f-119a55caf010","Type":"ContainerStarted","Data":"fd5be30d0910f908c60e65649c153aad4eb3c51fed592c9c29d7c398f52f05ef"} Jan 27 10:09:10 crc kubenswrapper[4869]: I0127 10:09:10.624363 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jd977" event={"ID":"795fb025-6527-42e5-b95f-119a55caf010","Type":"ContainerStarted","Data":"e95a6b513355fc133fd1705439e8291a47852aea60b4fe54c0fa8164e38e3f0e"} Jan 27 10:09:10 crc kubenswrapper[4869]: I0127 10:09:10.626035 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:09:10 crc kubenswrapper[4869]: I0127 10:09:10.669672 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-jd977" podStartSLOduration=9.357248213 podStartE2EDuration="22.669647745s" podCreationTimestamp="2026-01-27 10:08:48 +0000 UTC" firstStartedPulling="2026-01-27 10:08:54.136912949 +0000 UTC m=+902.757337032" lastFinishedPulling="2026-01-27 10:09:07.449312481 +0000 UTC m=+916.069736564" observedRunningTime="2026-01-27 
10:09:10.664499769 +0000 UTC m=+919.284923862" watchObservedRunningTime="2026-01-27 10:09:10.669647745 +0000 UTC m=+919.290071838" Jan 27 10:09:11 crc kubenswrapper[4869]: I0127 10:09:11.323693 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 27 10:09:11 crc kubenswrapper[4869]: I0127 10:09:11.359332 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 27 10:09:11 crc kubenswrapper[4869]: I0127 10:09:11.633467 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:09:11 crc kubenswrapper[4869]: I0127 10:09:11.675197 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 10:09:11 crc kubenswrapper[4869]: I0127 10:09:11.675666 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 10:09:12 crc kubenswrapper[4869]: I0127 10:09:12.193961 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 27 10:09:12 crc kubenswrapper[4869]: I0127 10:09:12.442817 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 27 10:09:12 crc kubenswrapper[4869]: I0127 10:09:12.442863 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 27 10:09:13 crc kubenswrapper[4869]: I0127 10:09:13.029659 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 27 10:09:13 crc kubenswrapper[4869]: I0127 10:09:13.250057 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 27 10:09:13 crc kubenswrapper[4869]: I0127 10:09:13.686416 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 27 10:09:14 crc kubenswrapper[4869]: I0127 10:09:14.248373 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 27 10:09:14 crc kubenswrapper[4869]: I0127 10:09:14.336160 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 27 10:09:14 crc kubenswrapper[4869]: I0127 10:09:14.865525 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 10:09:14 crc kubenswrapper[4869]: I0127 10:09:14.929722 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-knzmv"] Jan 27 10:09:14 crc kubenswrapper[4869]: I0127 10:09:14.930016 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" podUID="9a858d35-6c3c-4280-94e3-432f7a644440" containerName="dnsmasq-dns" containerID="cri-o://751c9385ce867b9867b74e81c59c9c98c3b2d1980f35abfd2d0fa5d86a4477a6" gracePeriod=10 Jan 27 10:09:14 crc kubenswrapper[4869]: I0127 10:09:14.935097 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" Jan 27 10:09:14 crc kubenswrapper[4869]: I0127 10:09:14.972518 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-4z8cs"] Jan 27 10:09:14 crc kubenswrapper[4869]: E0127 10:09:14.972839 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c60cfa24-5bbd-427e-be0c-428900867c80" containerName="init" Jan 27 10:09:14 crc kubenswrapper[4869]: I0127 10:09:14.972853 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c60cfa24-5bbd-427e-be0c-428900867c80" containerName="init" Jan 27 10:09:14 crc kubenswrapper[4869]: E0127 10:09:14.972875 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c60cfa24-5bbd-427e-be0c-428900867c80" containerName="dnsmasq-dns" Jan 27 10:09:14 crc kubenswrapper[4869]: I0127 10:09:14.972881 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c60cfa24-5bbd-427e-be0c-428900867c80" containerName="dnsmasq-dns" Jan 27 10:09:14 crc kubenswrapper[4869]: E0127 10:09:14.972904 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6476143-8339-4837-8444-2bb4141d5da5" containerName="dnsmasq-dns" Jan 27 10:09:14 crc kubenswrapper[4869]: I0127 10:09:14.972911 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6476143-8339-4837-8444-2bb4141d5da5" containerName="dnsmasq-dns" Jan 27 10:09:14 crc kubenswrapper[4869]: E0127 10:09:14.972926 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6476143-8339-4837-8444-2bb4141d5da5" containerName="init" Jan 27 10:09:14 crc kubenswrapper[4869]: I0127 10:09:14.972932 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6476143-8339-4837-8444-2bb4141d5da5" containerName="init" Jan 27 10:09:14 crc kubenswrapper[4869]: I0127 10:09:14.973079 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c60cfa24-5bbd-427e-be0c-428900867c80" containerName="dnsmasq-dns" Jan 27 10:09:14 crc kubenswrapper[4869]: I0127 10:09:14.973093 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6476143-8339-4837-8444-2bb4141d5da5" containerName="dnsmasq-dns" Jan 27 10:09:14 crc kubenswrapper[4869]: I0127 10:09:14.973819 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-4z8cs" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.003768 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-4z8cs"] Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.119915 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-dns-svc\") pod \"dnsmasq-dns-698758b865-4z8cs\" (UID: \"b6cd084b-c383-4201-972c-227cedc088a4\") " pod="openstack/dnsmasq-dns-698758b865-4z8cs" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.120100 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-4z8cs\" (UID: \"b6cd084b-c383-4201-972c-227cedc088a4\") " pod="openstack/dnsmasq-dns-698758b865-4z8cs" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.120194 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcg5m\" (UniqueName: \"kubernetes.io/projected/b6cd084b-c383-4201-972c-227cedc088a4-kube-api-access-xcg5m\") pod \"dnsmasq-dns-698758b865-4z8cs\" (UID: \"b6cd084b-c383-4201-972c-227cedc088a4\") " pod="openstack/dnsmasq-dns-698758b865-4z8cs" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.120340 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-4z8cs\" (UID: \"b6cd084b-c383-4201-972c-227cedc088a4\") " pod="openstack/dnsmasq-dns-698758b865-4z8cs" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.120442 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-config\") pod \"dnsmasq-dns-698758b865-4z8cs\" (UID: \"b6cd084b-c383-4201-972c-227cedc088a4\") " pod="openstack/dnsmasq-dns-698758b865-4z8cs" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.222060 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcg5m\" (UniqueName: \"kubernetes.io/projected/b6cd084b-c383-4201-972c-227cedc088a4-kube-api-access-xcg5m\") pod \"dnsmasq-dns-698758b865-4z8cs\" (UID: \"b6cd084b-c383-4201-972c-227cedc088a4\") " pod="openstack/dnsmasq-dns-698758b865-4z8cs" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.222102 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-4z8cs\" (UID: \"b6cd084b-c383-4201-972c-227cedc088a4\") " pod="openstack/dnsmasq-dns-698758b865-4z8cs" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.222132 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-config\") pod \"dnsmasq-dns-698758b865-4z8cs\" (UID: \"b6cd084b-c383-4201-972c-227cedc088a4\") " pod="openstack/dnsmasq-dns-698758b865-4z8cs" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.222214 4869 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-dns-svc\") pod \"dnsmasq-dns-698758b865-4z8cs\" (UID: \"b6cd084b-c383-4201-972c-227cedc088a4\") " pod="openstack/dnsmasq-dns-698758b865-4z8cs" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.222248 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-4z8cs\" (UID: \"b6cd084b-c383-4201-972c-227cedc088a4\") " pod="openstack/dnsmasq-dns-698758b865-4z8cs" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.223295 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-4z8cs\" (UID: \"b6cd084b-c383-4201-972c-227cedc088a4\") " pod="openstack/dnsmasq-dns-698758b865-4z8cs" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.223295 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-dns-svc\") pod \"dnsmasq-dns-698758b865-4z8cs\" (UID: \"b6cd084b-c383-4201-972c-227cedc088a4\") " pod="openstack/dnsmasq-dns-698758b865-4z8cs" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.223500 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-4z8cs\" (UID: \"b6cd084b-c383-4201-972c-227cedc088a4\") " pod="openstack/dnsmasq-dns-698758b865-4z8cs" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.224720 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-config\") pod \"dnsmasq-dns-698758b865-4z8cs\" (UID: \"b6cd084b-c383-4201-972c-227cedc088a4\") " pod="openstack/dnsmasq-dns-698758b865-4z8cs" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.254333 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcg5m\" (UniqueName: \"kubernetes.io/projected/b6cd084b-c383-4201-972c-227cedc088a4-kube-api-access-xcg5m\") pod \"dnsmasq-dns-698758b865-4z8cs\" (UID: \"b6cd084b-c383-4201-972c-227cedc088a4\") " pod="openstack/dnsmasq-dns-698758b865-4z8cs" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.337160 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-4z8cs" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.341111 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.364379 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.446437 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.533438 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a858d35-6c3c-4280-94e3-432f7a644440-ovsdbserver-nb\") pod \"9a858d35-6c3c-4280-94e3-432f7a644440\" (UID: \"9a858d35-6c3c-4280-94e3-432f7a644440\") " Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.533742 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a858d35-6c3c-4280-94e3-432f7a644440-config\") pod \"9a858d35-6c3c-4280-94e3-432f7a644440\" (UID: \"9a858d35-6c3c-4280-94e3-432f7a644440\") " Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.533905 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rx4w\" (UniqueName: \"kubernetes.io/projected/9a858d35-6c3c-4280-94e3-432f7a644440-kube-api-access-2rx4w\") pod \"9a858d35-6c3c-4280-94e3-432f7a644440\" (UID: \"9a858d35-6c3c-4280-94e3-432f7a644440\") " Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.533947 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a858d35-6c3c-4280-94e3-432f7a644440-dns-svc\") pod \"9a858d35-6c3c-4280-94e3-432f7a644440\" (UID: \"9a858d35-6c3c-4280-94e3-432f7a644440\") " Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.586635 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a858d35-6c3c-4280-94e3-432f7a644440-kube-api-access-2rx4w" (OuterVolumeSpecName: "kube-api-access-2rx4w") pod "9a858d35-6c3c-4280-94e3-432f7a644440" (UID: "9a858d35-6c3c-4280-94e3-432f7a644440"). InnerVolumeSpecName "kube-api-access-2rx4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.587016 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 27 10:09:15 crc kubenswrapper[4869]: E0127 10:09:15.587404 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a858d35-6c3c-4280-94e3-432f7a644440" containerName="dnsmasq-dns" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.587422 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a858d35-6c3c-4280-94e3-432f7a644440" containerName="dnsmasq-dns" Jan 27 10:09:15 crc kubenswrapper[4869]: E0127 10:09:15.587446 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a858d35-6c3c-4280-94e3-432f7a644440" containerName="init" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.587454 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a858d35-6c3c-4280-94e3-432f7a644440" containerName="init" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.587589 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a858d35-6c3c-4280-94e3-432f7a644440" containerName="dnsmasq-dns" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.588806 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.590848 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-qw7tz" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.591175 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.591848 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.591966 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.593029 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.613296 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a858d35-6c3c-4280-94e3-432f7a644440-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9a858d35-6c3c-4280-94e3-432f7a644440" (UID: "9a858d35-6c3c-4280-94e3-432f7a644440"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.615661 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a858d35-6c3c-4280-94e3-432f7a644440-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9a858d35-6c3c-4280-94e3-432f7a644440" (UID: "9a858d35-6c3c-4280-94e3-432f7a644440"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.623666 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a858d35-6c3c-4280-94e3-432f7a644440-config" (OuterVolumeSpecName: "config") pod "9a858d35-6c3c-4280-94e3-432f7a644440" (UID: "9a858d35-6c3c-4280-94e3-432f7a644440"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.635532 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rx4w\" (UniqueName: \"kubernetes.io/projected/9a858d35-6c3c-4280-94e3-432f7a644440-kube-api-access-2rx4w\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.635574 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a858d35-6c3c-4280-94e3-432f7a644440-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.635588 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a858d35-6c3c-4280-94e3-432f7a644440-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.635601 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a858d35-6c3c-4280-94e3-432f7a644440-config\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.667384 4869 generic.go:334] "Generic (PLEG): container finished" podID="9a858d35-6c3c-4280-94e3-432f7a644440" containerID="751c9385ce867b9867b74e81c59c9c98c3b2d1980f35abfd2d0fa5d86a4477a6" exitCode=0 Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.667424 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" event={"ID":"9a858d35-6c3c-4280-94e3-432f7a644440","Type":"ContainerDied","Data":"751c9385ce867b9867b74e81c59c9c98c3b2d1980f35abfd2d0fa5d86a4477a6"} Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.667454 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" event={"ID":"9a858d35-6c3c-4280-94e3-432f7a644440","Type":"ContainerDied","Data":"6e960ba8b3a8c4910c93507877dd6f3fb1986cf5064d0bf4d390de706d42b5a2"} Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.667472 4869 scope.go:117] "RemoveContainer" containerID="751c9385ce867b9867b74e81c59c9c98c3b2d1980f35abfd2d0fa5d86a4477a6" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.667613 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.697209 4869 scope.go:117] "RemoveContainer" containerID="ccc26590f8b46477237dc7bb94b192f7cd77fa8b169e74a2f7cd9fb9f66f576f" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.697390 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.697434 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.717254 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-knzmv"] Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.722259 4869 scope.go:117] "RemoveContainer" containerID="751c9385ce867b9867b74e81c59c9c98c3b2d1980f35abfd2d0fa5d86a4477a6" Jan 27 10:09:15 crc kubenswrapper[4869]: E0127 10:09:15.722672 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"751c9385ce867b9867b74e81c59c9c98c3b2d1980f35abfd2d0fa5d86a4477a6\": container with ID starting with 751c9385ce867b9867b74e81c59c9c98c3b2d1980f35abfd2d0fa5d86a4477a6 not found: ID does not exist" containerID="751c9385ce867b9867b74e81c59c9c98c3b2d1980f35abfd2d0fa5d86a4477a6" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.722726 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"751c9385ce867b9867b74e81c59c9c98c3b2d1980f35abfd2d0fa5d86a4477a6"} err="failed to get container status \"751c9385ce867b9867b74e81c59c9c98c3b2d1980f35abfd2d0fa5d86a4477a6\": rpc error: code = NotFound desc = could not find container \"751c9385ce867b9867b74e81c59c9c98c3b2d1980f35abfd2d0fa5d86a4477a6\": container with ID starting with 751c9385ce867b9867b74e81c59c9c98c3b2d1980f35abfd2d0fa5d86a4477a6 not found: ID does not exist" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.722751 4869 scope.go:117] "RemoveContainer" containerID="ccc26590f8b46477237dc7bb94b192f7cd77fa8b169e74a2f7cd9fb9f66f576f" Jan 27 10:09:15 crc kubenswrapper[4869]: E0127 10:09:15.723202 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccc26590f8b46477237dc7bb94b192f7cd77fa8b169e74a2f7cd9fb9f66f576f\": container with ID starting with ccc26590f8b46477237dc7bb94b192f7cd77fa8b169e74a2f7cd9fb9f66f576f not found: ID does not exist" containerID="ccc26590f8b46477237dc7bb94b192f7cd77fa8b169e74a2f7cd9fb9f66f576f" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.723241 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccc26590f8b46477237dc7bb94b192f7cd77fa8b169e74a2f7cd9fb9f66f576f"} err="failed to get container status \"ccc26590f8b46477237dc7bb94b192f7cd77fa8b169e74a2f7cd9fb9f66f576f\": rpc error: code = NotFound desc = could not find container \"ccc26590f8b46477237dc7bb94b192f7cd77fa8b169e74a2f7cd9fb9f66f576f\": container with ID starting with 
ccc26590f8b46477237dc7bb94b192f7cd77fa8b169e74a2f7cd9fb9f66f576f not found: ID does not exist" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.726353 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-knzmv"] Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.739012 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0ddfa973-a8e8-4003-a986-61838793a923-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0ddfa973-a8e8-4003-a986-61838793a923\") " pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.739070 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ddfa973-a8e8-4003-a986-61838793a923-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0ddfa973-a8e8-4003-a986-61838793a923\") " pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.739111 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ddfa973-a8e8-4003-a986-61838793a923-scripts\") pod \"ovn-northd-0\" (UID: \"0ddfa973-a8e8-4003-a986-61838793a923\") " pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.739137 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ddfa973-a8e8-4003-a986-61838793a923-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0ddfa973-a8e8-4003-a986-61838793a923\") " pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.739161 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srj4l\" (UniqueName: \"kubernetes.io/projected/0ddfa973-a8e8-4003-a986-61838793a923-kube-api-access-srj4l\") pod \"ovn-northd-0\" (UID: \"0ddfa973-a8e8-4003-a986-61838793a923\") " pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.739207 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ddfa973-a8e8-4003-a986-61838793a923-config\") pod \"ovn-northd-0\" (UID: \"0ddfa973-a8e8-4003-a986-61838793a923\") " pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.739259 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ddfa973-a8e8-4003-a986-61838793a923-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0ddfa973-a8e8-4003-a986-61838793a923\") " pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.847081 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ddfa973-a8e8-4003-a986-61838793a923-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0ddfa973-a8e8-4003-a986-61838793a923\") " pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.847316 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0ddfa973-a8e8-4003-a986-61838793a923-ovn-rundir\") pod \"ovn-northd-0\" (UID: 
\"0ddfa973-a8e8-4003-a986-61838793a923\") " pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.847340 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ddfa973-a8e8-4003-a986-61838793a923-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0ddfa973-a8e8-4003-a986-61838793a923\") " pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.847391 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ddfa973-a8e8-4003-a986-61838793a923-scripts\") pod \"ovn-northd-0\" (UID: \"0ddfa973-a8e8-4003-a986-61838793a923\") " pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.847427 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ddfa973-a8e8-4003-a986-61838793a923-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0ddfa973-a8e8-4003-a986-61838793a923\") " pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.847451 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srj4l\" (UniqueName: \"kubernetes.io/projected/0ddfa973-a8e8-4003-a986-61838793a923-kube-api-access-srj4l\") pod \"ovn-northd-0\" (UID: \"0ddfa973-a8e8-4003-a986-61838793a923\") " pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.847520 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ddfa973-a8e8-4003-a986-61838793a923-config\") pod \"ovn-northd-0\" (UID: \"0ddfa973-a8e8-4003-a986-61838793a923\") " pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.848665 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ddfa973-a8e8-4003-a986-61838793a923-config\") pod \"ovn-northd-0\" (UID: \"0ddfa973-a8e8-4003-a986-61838793a923\") " pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.853407 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ddfa973-a8e8-4003-a986-61838793a923-scripts\") pod \"ovn-northd-0\" (UID: \"0ddfa973-a8e8-4003-a986-61838793a923\") " pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.853656 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ddfa973-a8e8-4003-a986-61838793a923-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0ddfa973-a8e8-4003-a986-61838793a923\") " pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.854121 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0ddfa973-a8e8-4003-a986-61838793a923-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0ddfa973-a8e8-4003-a986-61838793a923\") " pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.857105 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ddfa973-a8e8-4003-a986-61838793a923-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"0ddfa973-a8e8-4003-a986-61838793a923\") " pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.857244 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ddfa973-a8e8-4003-a986-61838793a923-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0ddfa973-a8e8-4003-a986-61838793a923\") " pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.867774 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-4z8cs"] Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.878015 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srj4l\" (UniqueName: \"kubernetes.io/projected/0ddfa973-a8e8-4003-a986-61838793a923-kube-api-access-srj4l\") pod \"ovn-northd-0\" (UID: \"0ddfa973-a8e8-4003-a986-61838793a923\") " pod="openstack/ovn-northd-0" Jan 27 10:09:15 crc kubenswrapper[4869]: I0127 10:09:15.916006 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.043349 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a858d35-6c3c-4280-94e3-432f7a644440" path="/var/lib/kubelet/pods/9a858d35-6c3c-4280-94e3-432f7a644440/volumes" Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.110069 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.114907 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.118220 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.118412 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-fc8sc" Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.118542 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.118665 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.134571 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.252857 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-lock\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0" Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.253448 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-etc-swift\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0" Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.253533 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: 
\"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0" Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.253579 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-cache\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0" Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.253639 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfsx8\" (UniqueName: \"kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-kube-api-access-nfsx8\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0" Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.253725 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0" Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.331038 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.354880 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-lock\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0" Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.354930 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-etc-swift\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0" Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.354954 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0" Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.354977 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-cache\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0" Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.355018 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfsx8\" (UniqueName: \"kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-kube-api-access-nfsx8\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0" Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.355040 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0" Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 
Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.355463 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0"
Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.355504 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-cache\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0"
Jan 27 10:09:16 crc kubenswrapper[4869]: E0127 10:09:16.355576 4869 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 27 10:09:16 crc kubenswrapper[4869]: E0127 10:09:16.355618 4869 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.355571 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-lock\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0"
Jan 27 10:09:16 crc kubenswrapper[4869]: E0127 10:09:16.355684 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-etc-swift podName:0429a74c-af6a-45f1-9ca2-b66dcd47ca38 nodeName:}" failed. No retries permitted until 2026-01-27 10:09:16.855660752 +0000 UTC m=+925.476084915 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-etc-swift") pod "swift-storage-0" (UID: "0429a74c-af6a-45f1-9ca2-b66dcd47ca38") : configmap "swift-ring-files" not found
Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.361652 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0"
Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.378911 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfsx8\" (UniqueName: \"kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-kube-api-access-nfsx8\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0"
Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.398164 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0"
Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.582472 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.678133 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0ddfa973-a8e8-4003-a986-61838793a923","Type":"ContainerStarted","Data":"105714736c06597fbba8454b2e1246f653cffff0530830cae036aa2efe175b59"}
Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.678332 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.681590 4869 generic.go:334] "Generic (PLEG): container finished" podID="b6cd084b-c383-4201-972c-227cedc088a4" containerID="9bc93b923bcee84df645c6a0144d72da0afd1f937cbcff7c569af3135b86dba9" exitCode=0
Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.681749 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-4z8cs" event={"ID":"b6cd084b-c383-4201-972c-227cedc088a4","Type":"ContainerDied","Data":"9bc93b923bcee84df645c6a0144d72da0afd1f937cbcff7c569af3135b86dba9"}
Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.681812 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-4z8cs" event={"ID":"b6cd084b-c383-4201-972c-227cedc088a4","Type":"ContainerStarted","Data":"c93573e658a0fef38c215bc59a0f2c30cff15aba9a7874971d0ed29508741497"}
Jan 27 10:09:16 crc kubenswrapper[4869]: I0127 10:09:16.862238 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-etc-swift\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0"
Jan 27 10:09:16 crc kubenswrapper[4869]: E0127 10:09:16.862370 4869 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 27 10:09:16 crc kubenswrapper[4869]: E0127 10:09:16.862606 4869 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 27 10:09:16 crc kubenswrapper[4869]: E0127 10:09:16.862661 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-etc-swift podName:0429a74c-af6a-45f1-9ca2-b66dcd47ca38 nodeName:}" failed. No retries permitted until 2026-01-27 10:09:17.862642901 +0000 UTC m=+926.483066994 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-etc-swift") pod "swift-storage-0" (UID: "0429a74c-af6a-45f1-9ca2-b66dcd47ca38") : configmap "swift-ring-files" not found
Jan 27 10:09:17 crc kubenswrapper[4869]: I0127 10:09:17.630094 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-57j2k"]
Jan 27 10:09:17 crc kubenswrapper[4869]: I0127 10:09:17.634134 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-57j2k"
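
The etc-swift failures in this stretch show the kubelet's per-operation retry delay doubling each time the projected ConfigMap swift-ring-files is still missing: durationBeforeRetry 500ms, then 1s above, then 2s and 4s further down. A minimal sketch of that doubling schedule, assuming only what these timestamps show (500ms base, factor 2); the function name and the absence of a cap are illustrative, not taken from kubelet source:

package main

import (
	"fmt"
	"time"
)

// retryDelay returns the wait imposed after the n-th consecutive failure,
// assuming the 500ms-base, doubling schedule that the durationBeforeRetry
// values logged above suggest (500ms, 1s, 2s, 4s, ...).
func retryDelay(failures int) time.Duration {
	d := 500 * time.Millisecond
	for i := 1; i < failures; i++ {
		d *= 2
	}
	return d
}

func main() {
	for f := 1; f <= 4; f++ {
		// failure 1: 500ms, failure 2: 1s, failure 3: 2s, failure 4: 4s
		fmt.Printf("failure %d: next retry in %v\n", f, retryDelay(f))
	}
}

Each failed SetUp in the log lands on the next step of this schedule until the ConfigMap appears, at which point the projected volume can mount and the pod can start.
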
Jan 27 10:09:17 crc kubenswrapper[4869]: I0127 10:09:17.653902 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-57j2k"]
Jan 27 10:09:17 crc kubenswrapper[4869]: I0127 10:09:17.689854 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-4z8cs" event={"ID":"b6cd084b-c383-4201-972c-227cedc088a4","Type":"ContainerStarted","Data":"9e5b64ae8a8eeb0182776baef2bdc04823c0fa0b338e226f243f10a42b151827"}
Jan 27 10:09:17 crc kubenswrapper[4869]: I0127 10:09:17.690600 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-4z8cs"
Jan 27 10:09:17 crc kubenswrapper[4869]: I0127 10:09:17.691195 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0ddfa973-a8e8-4003-a986-61838793a923","Type":"ContainerStarted","Data":"1d9353057714a98821c9a17fb08de0b79f456866e3ba0efe9786babc3af7e7b8"}
Jan 27 10:09:17 crc kubenswrapper[4869]: I0127 10:09:17.711739 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-4z8cs" podStartSLOduration=3.711720614 podStartE2EDuration="3.711720614s" podCreationTimestamp="2026-01-27 10:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 10:09:17.706296178 +0000 UTC m=+926.326720271" watchObservedRunningTime="2026-01-27 10:09:17.711720614 +0000 UTC m=+926.332144697"
Jan 27 10:09:17 crc kubenswrapper[4869]: I0127 10:09:17.775761 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2064d31-adb6-40dd-9bb8-c05cb35b3519-catalog-content\") pod \"redhat-operators-57j2k\" (UID: \"d2064d31-adb6-40dd-9bb8-c05cb35b3519\") " pod="openshift-marketplace/redhat-operators-57j2k"
Jan 27 10:09:17 crc kubenswrapper[4869]: I0127 10:09:17.775803 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2064d31-adb6-40dd-9bb8-c05cb35b3519-utilities\") pod \"redhat-operators-57j2k\" (UID: \"d2064d31-adb6-40dd-9bb8-c05cb35b3519\") " pod="openshift-marketplace/redhat-operators-57j2k"
Jan 27 10:09:17 crc kubenswrapper[4869]: I0127 10:09:17.775879 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w794b\" (UniqueName: \"kubernetes.io/projected/d2064d31-adb6-40dd-9bb8-c05cb35b3519-kube-api-access-w794b\") pod \"redhat-operators-57j2k\" (UID: \"d2064d31-adb6-40dd-9bb8-c05cb35b3519\") " pod="openshift-marketplace/redhat-operators-57j2k"
Jan 27 10:09:17 crc kubenswrapper[4869]: I0127 10:09:17.876801 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2064d31-adb6-40dd-9bb8-c05cb35b3519-catalog-content\") pod \"redhat-operators-57j2k\" (UID: \"d2064d31-adb6-40dd-9bb8-c05cb35b3519\") " pod="openshift-marketplace/redhat-operators-57j2k"
Jan 27 10:09:17 crc kubenswrapper[4869]: I0127 10:09:17.877130 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2064d31-adb6-40dd-9bb8-c05cb35b3519-utilities\") pod \"redhat-operators-57j2k\" (UID: \"d2064d31-adb6-40dd-9bb8-c05cb35b3519\") " pod="openshift-marketplace/redhat-operators-57j2k"
Jan 27 10:09:17 crc kubenswrapper[4869]: I0127 10:09:17.877174 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-etc-swift\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0"
Jan 27 10:09:17 crc kubenswrapper[4869]: I0127 10:09:17.877197 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w794b\" (UniqueName: \"kubernetes.io/projected/d2064d31-adb6-40dd-9bb8-c05cb35b3519-kube-api-access-w794b\") pod \"redhat-operators-57j2k\" (UID: \"d2064d31-adb6-40dd-9bb8-c05cb35b3519\") " pod="openshift-marketplace/redhat-operators-57j2k"
Jan 27 10:09:17 crc kubenswrapper[4869]: I0127 10:09:17.877283 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2064d31-adb6-40dd-9bb8-c05cb35b3519-catalog-content\") pod \"redhat-operators-57j2k\" (UID: \"d2064d31-adb6-40dd-9bb8-c05cb35b3519\") " pod="openshift-marketplace/redhat-operators-57j2k"
Jan 27 10:09:17 crc kubenswrapper[4869]: E0127 10:09:17.877330 4869 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 27 10:09:17 crc kubenswrapper[4869]: E0127 10:09:17.877347 4869 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 27 10:09:17 crc kubenswrapper[4869]: E0127 10:09:17.877390 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-etc-swift podName:0429a74c-af6a-45f1-9ca2-b66dcd47ca38 nodeName:}" failed. No retries permitted until 2026-01-27 10:09:19.877374555 +0000 UTC m=+928.497798638 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-etc-swift") pod "swift-storage-0" (UID: "0429a74c-af6a-45f1-9ca2-b66dcd47ca38") : configmap "swift-ring-files" not found
Jan 27 10:09:17 crc kubenswrapper[4869]: I0127 10:09:17.877517 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2064d31-adb6-40dd-9bb8-c05cb35b3519-utilities\") pod \"redhat-operators-57j2k\" (UID: \"d2064d31-adb6-40dd-9bb8-c05cb35b3519\") " pod="openshift-marketplace/redhat-operators-57j2k"
Jan 27 10:09:17 crc kubenswrapper[4869]: I0127 10:09:17.896099 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w794b\" (UniqueName: \"kubernetes.io/projected/d2064d31-adb6-40dd-9bb8-c05cb35b3519-kube-api-access-w794b\") pod \"redhat-operators-57j2k\" (UID: \"d2064d31-adb6-40dd-9bb8-c05cb35b3519\") " pod="openshift-marketplace/redhat-operators-57j2k"
Jan 27 10:09:17 crc kubenswrapper[4869]: I0127 10:09:17.977664 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-57j2k"
Jan 27 10:09:18 crc kubenswrapper[4869]: I0127 10:09:18.382730 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-57j2k"]
Jan 27 10:09:18 crc kubenswrapper[4869]: I0127 10:09:18.698700 4869 generic.go:334] "Generic (PLEG): container finished" podID="d2064d31-adb6-40dd-9bb8-c05cb35b3519" containerID="04555c959f7029caa84ccdeb3f7c4c8db71ffe197803916457ba7802492a6559" exitCode=0
Jan 27 10:09:18 crc kubenswrapper[4869]: I0127 10:09:18.698770 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57j2k" event={"ID":"d2064d31-adb6-40dd-9bb8-c05cb35b3519","Type":"ContainerDied","Data":"04555c959f7029caa84ccdeb3f7c4c8db71ffe197803916457ba7802492a6559"}
Jan 27 10:09:18 crc kubenswrapper[4869]: I0127 10:09:18.699048 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57j2k" event={"ID":"d2064d31-adb6-40dd-9bb8-c05cb35b3519","Type":"ContainerStarted","Data":"60308b6ed926da0a9987ed91bfb116717b4113db5c271baefff9e849c257cedb"}
Jan 27 10:09:18 crc kubenswrapper[4869]: I0127 10:09:18.701090 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0ddfa973-a8e8-4003-a986-61838793a923","Type":"ContainerStarted","Data":"d063ecb15227e31dd7e3aced662bef873e4d99cb8182e802001dce51f8cca373"}
Jan 27 10:09:18 crc kubenswrapper[4869]: I0127 10:09:18.701376 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 27 10:09:18 crc kubenswrapper[4869]: I0127 10:09:18.737361 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.695331172 podStartE2EDuration="3.73734256s" podCreationTimestamp="2026-01-27 10:09:15 +0000 UTC" firstStartedPulling="2026-01-27 10:09:16.333305887 +0000 UTC m=+924.953729990" lastFinishedPulling="2026-01-27 10:09:17.375317305 +0000 UTC m=+925.995741378" observedRunningTime="2026-01-27 10:09:18.736092628 +0000 UTC m=+927.356516721" watchObservedRunningTime="2026-01-27 10:09:18.73734256 +0000 UTC m=+927.357766643"
Jan 27 10:09:19 crc kubenswrapper[4869]: I0127 10:09:19.711971 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57j2k" event={"ID":"d2064d31-adb6-40dd-9bb8-c05cb35b3519","Type":"ContainerStarted","Data":"5ff0ed6930fdbab4da8ae83b8076bfb8f8c51a2dda55aba05691cc2adf979c40"}
Jan 27 10:09:19 crc kubenswrapper[4869]: I0127 10:09:19.910699 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-etc-swift\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0"
Jan 27 10:09:19 crc kubenswrapper[4869]: E0127 10:09:19.910921 4869 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 27 10:09:19 crc kubenswrapper[4869]: E0127 10:09:19.911102 4869 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 27 10:09:19 crc kubenswrapper[4869]: E0127 10:09:19.911176 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-etc-swift podName:0429a74c-af6a-45f1-9ca2-b66dcd47ca38 nodeName:}" failed. No retries permitted until 2026-01-27 10:09:23.911154321 +0000 UTC m=+932.531578424 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-etc-swift") pod "swift-storage-0" (UID: "0429a74c-af6a-45f1-9ca2-b66dcd47ca38") : configmap "swift-ring-files" not found
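
The startup-latency entries just above reconcile arithmetically: for dnsmasq-dns-698758b865-4z8cs both pull timestamps are zero values, so podStartSLOduration equals podStartE2EDuration (3.711720614s); for ovn-northd-0, podStartSLOduration appears to be the end-to-end duration minus the image-pull window, which the monotonic m=+ offsets confirm exactly. A small check of that subtraction (constants copied from the ovn-northd-0 entry; the formula is inferred from these numbers, not quoted from kubelet source):

package main

import "fmt"

func main() {
	const (
		e2e        = 3.73734256    // podStartE2EDuration, seconds
		pullStart  = 924.953729990 // firstStartedPulling, monotonic m=+ offset
		pullFinish = 925.995741378 // lastFinishedPulling, monotonic m=+ offset
	)
	// 3.73734256 - (925.995741378 - 924.953729990) = 2.695331172
	fmt.Printf("podStartSLOduration = %.9f s\n", e2e-(pullFinish-pullStart))
}
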
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.017511 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-q2sjr"]
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.021466 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.029282 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.029282 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.038326 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.066865 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-nn56w"]
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.068150 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.081968 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nn56w"]
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.089763 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-q2sjr"]
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.102286 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-q2sjr"]
Jan 27 10:09:20 crc kubenswrapper[4869]: E0127 10:09:20.102995 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-56b8l ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-q2sjr" podUID="f79b81a1-5d2b-472c-a9c1-5928f343f4ce"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.113884 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-combined-ca-bundle\") pod \"swift-ring-rebalance-q2sjr\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") " pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.113977 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-swiftconf\") pod \"swift-ring-rebalance-q2sjr\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") " pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.114129 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-scripts\") pod \"swift-ring-rebalance-q2sjr\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") " pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.114359 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-ring-data-devices\") pod \"swift-ring-rebalance-q2sjr\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") " pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.114522 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-etc-swift\") pod \"swift-ring-rebalance-q2sjr\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") " pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.114623 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-dispersionconf\") pod \"swift-ring-rebalance-q2sjr\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") " pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.114649 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56b8l\" (UniqueName: \"kubernetes.io/projected/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-kube-api-access-56b8l\") pod \"swift-ring-rebalance-q2sjr\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") " pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.191509 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7fd796d7df-knzmv" podUID="9a858d35-6c3c-4280-94e3-432f7a644440" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.105:5353: i/o timeout"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.215629 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f91198cd-1581-4ca7-9be2-98da975eefd7-etc-swift\") pod \"swift-ring-rebalance-nn56w\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.215674 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f91198cd-1581-4ca7-9be2-98da975eefd7-scripts\") pod \"swift-ring-rebalance-nn56w\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.215711 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f91198cd-1581-4ca7-9be2-98da975eefd7-swiftconf\") pod \"swift-ring-rebalance-nn56w\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.215932 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-scripts\") pod \"swift-ring-rebalance-q2sjr\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") " pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.216068 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-ring-data-devices\") pod \"swift-ring-rebalance-q2sjr\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") " pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.216130 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f91198cd-1581-4ca7-9be2-98da975eefd7-dispersionconf\") pod \"swift-ring-rebalance-nn56w\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.216176 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91198cd-1581-4ca7-9be2-98da975eefd7-combined-ca-bundle\") pod \"swift-ring-rebalance-nn56w\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.216221 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-etc-swift\") pod \"swift-ring-rebalance-q2sjr\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") " pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.216390 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-dispersionconf\") pod \"swift-ring-rebalance-q2sjr\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") " pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.216453 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56b8l\" (UniqueName: \"kubernetes.io/projected/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-kube-api-access-56b8l\") pod \"swift-ring-rebalance-q2sjr\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") " pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.216567 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-combined-ca-bundle\") pod \"swift-ring-rebalance-q2sjr\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") " pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.216660 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qllvd\" (UniqueName: \"kubernetes.io/projected/f91198cd-1581-4ca7-9be2-98da975eefd7-kube-api-access-qllvd\") pod \"swift-ring-rebalance-nn56w\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.216695 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-etc-swift\") pod \"swift-ring-rebalance-q2sjr\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") " pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.216702 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f91198cd-1581-4ca7-9be2-98da975eefd7-ring-data-devices\") pod \"swift-ring-rebalance-nn56w\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.216750 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-ring-data-devices\") pod \"swift-ring-rebalance-q2sjr\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") " pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.216895 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-swiftconf\") pod \"swift-ring-rebalance-q2sjr\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") " pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.217279 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-scripts\") pod \"swift-ring-rebalance-q2sjr\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") " pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.222532 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-dispersionconf\") pod \"swift-ring-rebalance-q2sjr\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") " pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.222902 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-swiftconf\") pod \"swift-ring-rebalance-q2sjr\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") " pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.230465 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-combined-ca-bundle\") pod \"swift-ring-rebalance-q2sjr\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") " pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.232406 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56b8l\" (UniqueName: \"kubernetes.io/projected/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-kube-api-access-56b8l\") pod \"swift-ring-rebalance-q2sjr\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") " pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.318874 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f91198cd-1581-4ca7-9be2-98da975eefd7-etc-swift\") pod \"swift-ring-rebalance-nn56w\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.318987 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f91198cd-1581-4ca7-9be2-98da975eefd7-scripts\") pod \"swift-ring-rebalance-nn56w\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.319060 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f91198cd-1581-4ca7-9be2-98da975eefd7-swiftconf\") pod \"swift-ring-rebalance-nn56w\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.319257 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f91198cd-1581-4ca7-9be2-98da975eefd7-dispersionconf\") pod \"swift-ring-rebalance-nn56w\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.319327 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91198cd-1581-4ca7-9be2-98da975eefd7-combined-ca-bundle\") pod \"swift-ring-rebalance-nn56w\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.319342 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f91198cd-1581-4ca7-9be2-98da975eefd7-etc-swift\") pod \"swift-ring-rebalance-nn56w\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.319544 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qllvd\" (UniqueName: \"kubernetes.io/projected/f91198cd-1581-4ca7-9be2-98da975eefd7-kube-api-access-qllvd\") pod \"swift-ring-rebalance-nn56w\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.319617 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f91198cd-1581-4ca7-9be2-98da975eefd7-ring-data-devices\") pod \"swift-ring-rebalance-nn56w\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.320624 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f91198cd-1581-4ca7-9be2-98da975eefd7-scripts\") pod \"swift-ring-rebalance-nn56w\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.321556 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f91198cd-1581-4ca7-9be2-98da975eefd7-ring-data-devices\") pod \"swift-ring-rebalance-nn56w\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.322179 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f91198cd-1581-4ca7-9be2-98da975eefd7-swiftconf\") pod \"swift-ring-rebalance-nn56w\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.322562 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f91198cd-1581-4ca7-9be2-98da975eefd7-dispersionconf\") pod \"swift-ring-rebalance-nn56w\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.323132 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91198cd-1581-4ca7-9be2-98da975eefd7-combined-ca-bundle\") pod \"swift-ring-rebalance-nn56w\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.338923 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qllvd\" (UniqueName: \"kubernetes.io/projected/f91198cd-1581-4ca7-9be2-98da975eefd7-kube-api-access-qllvd\") pod \"swift-ring-rebalance-nn56w\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.386550 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nn56w"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.391505 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-k9lxv"]
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.392710 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-k9lxv"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.394942 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.412680 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-k9lxv"]
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.523268 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn2qd\" (UniqueName: \"kubernetes.io/projected/615c41f2-860d-4920-9b46-133877ae2067-kube-api-access-kn2qd\") pod \"root-account-create-update-k9lxv\" (UID: \"615c41f2-860d-4920-9b46-133877ae2067\") " pod="openstack/root-account-create-update-k9lxv"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.523332 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/615c41f2-860d-4920-9b46-133877ae2067-operator-scripts\") pod \"root-account-create-update-k9lxv\" (UID: \"615c41f2-860d-4920-9b46-133877ae2067\") " pod="openstack/root-account-create-update-k9lxv"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.625071 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/615c41f2-860d-4920-9b46-133877ae2067-operator-scripts\") pod \"root-account-create-update-k9lxv\" (UID: \"615c41f2-860d-4920-9b46-133877ae2067\") " pod="openstack/root-account-create-update-k9lxv"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.625427 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn2qd\" (UniqueName: \"kubernetes.io/projected/615c41f2-860d-4920-9b46-133877ae2067-kube-api-access-kn2qd\") pod \"root-account-create-update-k9lxv\" (UID: \"615c41f2-860d-4920-9b46-133877ae2067\") " pod="openstack/root-account-create-update-k9lxv"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.626183 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/615c41f2-860d-4920-9b46-133877ae2067-operator-scripts\") pod \"root-account-create-update-k9lxv\" (UID: \"615c41f2-860d-4920-9b46-133877ae2067\") " pod="openstack/root-account-create-update-k9lxv"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.648367 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn2qd\" (UniqueName: \"kubernetes.io/projected/615c41f2-860d-4920-9b46-133877ae2067-kube-api-access-kn2qd\") pod \"root-account-create-update-k9lxv\" (UID: \"615c41f2-860d-4920-9b46-133877ae2067\") " pod="openstack/root-account-create-update-k9lxv"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.721391 4869 generic.go:334] "Generic (PLEG): container finished" podID="d2064d31-adb6-40dd-9bb8-c05cb35b3519" containerID="5ff0ed6930fdbab4da8ae83b8076bfb8f8c51a2dda55aba05691cc2adf979c40" exitCode=0
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.721437 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57j2k" event={"ID":"d2064d31-adb6-40dd-9bb8-c05cb35b3519","Type":"ContainerDied","Data":"5ff0ed6930fdbab4da8ae83b8076bfb8f8c51a2dda55aba05691cc2adf979c40"}
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.721465 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.740019 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.790382 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-k9lxv"
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.832267 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-combined-ca-bundle\") pod \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") "
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.832569 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56b8l\" (UniqueName: \"kubernetes.io/projected/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-kube-api-access-56b8l\") pod \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") "
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.832612 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-ring-data-devices\") pod \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") "
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.832684 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-scripts\") pod \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") "
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.832709 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-etc-swift\") pod \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") "
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.832785 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-dispersionconf\") pod \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") "
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.832860 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-swiftconf\") pod \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\" (UID: \"f79b81a1-5d2b-472c-a9c1-5928f343f4ce\") "
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.833407 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-scripts" (OuterVolumeSpecName: "scripts") pod "f79b81a1-5d2b-472c-a9c1-5928f343f4ce" (UID: "f79b81a1-5d2b-472c-a9c1-5928f343f4ce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.835720 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "f79b81a1-5d2b-472c-a9c1-5928f343f4ce" (UID: "f79b81a1-5d2b-472c-a9c1-5928f343f4ce"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.835802 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f79b81a1-5d2b-472c-a9c1-5928f343f4ce" (UID: "f79b81a1-5d2b-472c-a9c1-5928f343f4ce"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.837164 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "f79b81a1-5d2b-472c-a9c1-5928f343f4ce" (UID: "f79b81a1-5d2b-472c-a9c1-5928f343f4ce"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.837222 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-kube-api-access-56b8l" (OuterVolumeSpecName: "kube-api-access-56b8l") pod "f79b81a1-5d2b-472c-a9c1-5928f343f4ce" (UID: "f79b81a1-5d2b-472c-a9c1-5928f343f4ce"). InnerVolumeSpecName "kube-api-access-56b8l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.838156 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f79b81a1-5d2b-472c-a9c1-5928f343f4ce" (UID: "f79b81a1-5d2b-472c-a9c1-5928f343f4ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.838942 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "f79b81a1-5d2b-472c-a9c1-5928f343f4ce" (UID: "f79b81a1-5d2b-472c-a9c1-5928f343f4ce"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.854433 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nn56w"]
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.935506 4869 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.935553 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.935573 4869 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.935591 4869 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.935608 4869 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.935627 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 10:09:20 crc kubenswrapper[4869]: I0127 10:09:20.935644 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56b8l\" (UniqueName: \"kubernetes.io/projected/f79b81a1-5d2b-472c-a9c1-5928f343f4ce-kube-api-access-56b8l\") on node \"crc\" DevicePath \"\""
Jan 27 10:09:21 crc kubenswrapper[4869]: I0127 10:09:21.262475 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-k9lxv"]
Jan 27 10:09:21 crc kubenswrapper[4869]: W0127 10:09:21.266447 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod615c41f2_860d_4920_9b46_133877ae2067.slice/crio-fb0a8ebb24cfc34923232b03663ceb22ec2038c630242de9fa8e0d1137c60d00 WatchSource:0}: Error finding container fb0a8ebb24cfc34923232b03663ceb22ec2038c630242de9fa8e0d1137c60d00: Status 404 returned error can't find the container with id fb0a8ebb24cfc34923232b03663ceb22ec2038c630242de9fa8e0d1137c60d00
Jan 27 10:09:21 crc kubenswrapper[4869]: I0127 10:09:21.735992 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nn56w" event={"ID":"f91198cd-1581-4ca7-9be2-98da975eefd7","Type":"ContainerStarted","Data":"155bbcc8a03239f988e1631b99667bf80458a97bd88dbfe34c41eef1ca15d7a4"}
Jan 27 10:09:21 crc kubenswrapper[4869]: I0127 10:09:21.738901 4869 generic.go:334] "Generic (PLEG): container finished" podID="615c41f2-860d-4920-9b46-133877ae2067" containerID="20b3dda39de376453986365ef05a1863ba7f4c0cbf7917a8a7915c2b654753b8" exitCode=0
Jan 27 10:09:21 crc kubenswrapper[4869]: I0127 10:09:21.738996 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-q2sjr"
Jan 27 10:09:21 crc kubenswrapper[4869]: I0127 10:09:21.739811 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k9lxv" event={"ID":"615c41f2-860d-4920-9b46-133877ae2067","Type":"ContainerDied","Data":"20b3dda39de376453986365ef05a1863ba7f4c0cbf7917a8a7915c2b654753b8"}
Jan 27 10:09:21 crc kubenswrapper[4869]: I0127 10:09:21.739878 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k9lxv" event={"ID":"615c41f2-860d-4920-9b46-133877ae2067","Type":"ContainerStarted","Data":"fb0a8ebb24cfc34923232b03663ceb22ec2038c630242de9fa8e0d1137c60d00"}
Jan 27 10:09:21 crc kubenswrapper[4869]: I0127 10:09:21.810689 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-q2sjr"]
Jan 27 10:09:21 crc kubenswrapper[4869]: I0127 10:09:21.822497 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-q2sjr"]
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.044440 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79b81a1-5d2b-472c-a9c1-5928f343f4ce" path="/var/lib/kubelet/pods/f79b81a1-5d2b-472c-a9c1-5928f343f4ce/volumes"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.407372 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hcxqz"]
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.410244 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hcxqz"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.420215 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hcxqz"]
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.561399 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8sng\" (UniqueName: \"kubernetes.io/projected/a8954ce1-4dee-4849-b0a5-26461590a6a0-kube-api-access-g8sng\") pod \"community-operators-hcxqz\" (UID: \"a8954ce1-4dee-4849-b0a5-26461590a6a0\") " pod="openshift-marketplace/community-operators-hcxqz"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.561801 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8954ce1-4dee-4849-b0a5-26461590a6a0-catalog-content\") pod \"community-operators-hcxqz\" (UID: \"a8954ce1-4dee-4849-b0a5-26461590a6a0\") " pod="openshift-marketplace/community-operators-hcxqz"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.561959 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8954ce1-4dee-4849-b0a5-26461590a6a0-utilities\") pod \"community-operators-hcxqz\" (UID: \"a8954ce1-4dee-4849-b0a5-26461590a6a0\") " pod="openshift-marketplace/community-operators-hcxqz"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.589964 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-4dqbq"]
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.592156 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4dqbq"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.603364 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4dqbq"]
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.663474 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8sng\" (UniqueName: \"kubernetes.io/projected/a8954ce1-4dee-4849-b0a5-26461590a6a0-kube-api-access-g8sng\") pod \"community-operators-hcxqz\" (UID: \"a8954ce1-4dee-4849-b0a5-26461590a6a0\") " pod="openshift-marketplace/community-operators-hcxqz"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.664037 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8954ce1-4dee-4849-b0a5-26461590a6a0-catalog-content\") pod \"community-operators-hcxqz\" (UID: \"a8954ce1-4dee-4849-b0a5-26461590a6a0\") " pod="openshift-marketplace/community-operators-hcxqz"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.664068 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zp4l\" (UniqueName: \"kubernetes.io/projected/5ada9983-506d-4de8-9d7d-8f7fc1bcb50f-kube-api-access-7zp4l\") pod \"keystone-db-create-4dqbq\" (UID: \"5ada9983-506d-4de8-9d7d-8f7fc1bcb50f\") " pod="openstack/keystone-db-create-4dqbq"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.664093 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ada9983-506d-4de8-9d7d-8f7fc1bcb50f-operator-scripts\") pod \"keystone-db-create-4dqbq\" (UID: \"5ada9983-506d-4de8-9d7d-8f7fc1bcb50f\") " pod="openstack/keystone-db-create-4dqbq"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.664132 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8954ce1-4dee-4849-b0a5-26461590a6a0-utilities\") pod \"community-operators-hcxqz\" (UID: \"a8954ce1-4dee-4849-b0a5-26461590a6a0\") " pod="openshift-marketplace/community-operators-hcxqz"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.664496 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8954ce1-4dee-4849-b0a5-26461590a6a0-utilities\") pod \"community-operators-hcxqz\" (UID: \"a8954ce1-4dee-4849-b0a5-26461590a6a0\") " pod="openshift-marketplace/community-operators-hcxqz"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.664713 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8954ce1-4dee-4849-b0a5-26461590a6a0-catalog-content\") pod \"community-operators-hcxqz\" (UID: \"a8954ce1-4dee-4849-b0a5-26461590a6a0\") " pod="openshift-marketplace/community-operators-hcxqz"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.689946 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8sng\" (UniqueName: \"kubernetes.io/projected/a8954ce1-4dee-4849-b0a5-26461590a6a0-kube-api-access-g8sng\") pod \"community-operators-hcxqz\" (UID: \"a8954ce1-4dee-4849-b0a5-26461590a6a0\") " pod="openshift-marketplace/community-operators-hcxqz"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.708203 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3ac0-account-create-update-s57tn"]
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.709458 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3ac0-account-create-update-s57tn"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.711188 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.718538 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3ac0-account-create-update-s57tn"]
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.732398 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hcxqz"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.754241 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57j2k" event={"ID":"d2064d31-adb6-40dd-9bb8-c05cb35b3519","Type":"ContainerStarted","Data":"e8f86c4616646554c52b7e073d6ec6e67f5385932c083afbd6f17249e8892242"}
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.765476 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2tff\" (UniqueName: \"kubernetes.io/projected/9ec835b0-a5a2-4a65-ad57-8282ba92fc1c-kube-api-access-r2tff\") pod \"keystone-3ac0-account-create-update-s57tn\" (UID: \"9ec835b0-a5a2-4a65-ad57-8282ba92fc1c\") " pod="openstack/keystone-3ac0-account-create-update-s57tn"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.765556 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zp4l\" (UniqueName: \"kubernetes.io/projected/5ada9983-506d-4de8-9d7d-8f7fc1bcb50f-kube-api-access-7zp4l\") pod \"keystone-db-create-4dqbq\" (UID: \"5ada9983-506d-4de8-9d7d-8f7fc1bcb50f\") " pod="openstack/keystone-db-create-4dqbq"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.765581 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ada9983-506d-4de8-9d7d-8f7fc1bcb50f-operator-scripts\") pod \"keystone-db-create-4dqbq\" (UID: \"5ada9983-506d-4de8-9d7d-8f7fc1bcb50f\") " pod="openstack/keystone-db-create-4dqbq"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.765611 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ec835b0-a5a2-4a65-ad57-8282ba92fc1c-operator-scripts\") pod \"keystone-3ac0-account-create-update-s57tn\" (UID: \"9ec835b0-a5a2-4a65-ad57-8282ba92fc1c\") " pod="openstack/keystone-3ac0-account-create-update-s57tn"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.766635 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ada9983-506d-4de8-9d7d-8f7fc1bcb50f-operator-scripts\") pod \"keystone-db-create-4dqbq\" (UID: \"5ada9983-506d-4de8-9d7d-8f7fc1bcb50f\") " pod="openstack/keystone-db-create-4dqbq"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.784058 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-57j2k" podStartSLOduration=2.329203646 podStartE2EDuration="5.784017938s" podCreationTimestamp="2026-01-27 10:09:17 +0000 UTC" firstStartedPulling="2026-01-27 10:09:18.700789958 +0000 UTC m=+927.321214041" lastFinishedPulling="2026-01-27 10:09:22.15560425 +0000 UTC m=+930.776028333" observedRunningTime="2026-01-27 10:09:22.778457947 +0000 UTC m=+931.398882030" watchObservedRunningTime="2026-01-27 10:09:22.784017938 +0000 UTC m=+931.404442021"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.795888 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zp4l\" (UniqueName: \"kubernetes.io/projected/5ada9983-506d-4de8-9d7d-8f7fc1bcb50f-kube-api-access-7zp4l\") pod \"keystone-db-create-4dqbq\" (UID: \"5ada9983-506d-4de8-9d7d-8f7fc1bcb50f\") " pod="openstack/keystone-db-create-4dqbq"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.867479 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2tff\" (UniqueName: \"kubernetes.io/projected/9ec835b0-a5a2-4a65-ad57-8282ba92fc1c-kube-api-access-r2tff\") pod \"keystone-3ac0-account-create-update-s57tn\" (UID: \"9ec835b0-a5a2-4a65-ad57-8282ba92fc1c\") " pod="openstack/keystone-3ac0-account-create-update-s57tn"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.867578 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ec835b0-a5a2-4a65-ad57-8282ba92fc1c-operator-scripts\") pod \"keystone-3ac0-account-create-update-s57tn\" (UID: \"9ec835b0-a5a2-4a65-ad57-8282ba92fc1c\") " pod="openstack/keystone-3ac0-account-create-update-s57tn"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.868287 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ec835b0-a5a2-4a65-ad57-8282ba92fc1c-operator-scripts\") pod \"keystone-3ac0-account-create-update-s57tn\" (UID: \"9ec835b0-a5a2-4a65-ad57-8282ba92fc1c\") " pod="openstack/keystone-3ac0-account-create-update-s57tn"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.891675 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2tff\" (UniqueName: \"kubernetes.io/projected/9ec835b0-a5a2-4a65-ad57-8282ba92fc1c-kube-api-access-r2tff\") pod \"keystone-3ac0-account-create-update-s57tn\" (UID: \"9ec835b0-a5a2-4a65-ad57-8282ba92fc1c\") " pod="openstack/keystone-3ac0-account-create-update-s57tn"
Jan 27 10:09:22 crc kubenswrapper[4869]: I0127 10:09:22.922597 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4dqbq"
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.033376 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-wwrvj"]
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.035422 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wwrvj"
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.061454 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wwrvj"]
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.068200 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3ac0-account-create-update-s57tn"
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.118251 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-dc7e-account-create-update-dsvtj"]
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.119601 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dc7e-account-create-update-dsvtj"
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.121851 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.134516 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-dc7e-account-create-update-dsvtj"]
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.178739 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d604b62-cdb4-4227-997f-defd9a3ca643-operator-scripts\") pod \"placement-db-create-wwrvj\" (UID: \"1d604b62-cdb4-4227-997f-defd9a3ca643\") " pod="openstack/placement-db-create-wwrvj"
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.178874 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbmhm\" (UniqueName: \"kubernetes.io/projected/1d604b62-cdb4-4227-997f-defd9a3ca643-kube-api-access-jbmhm\") pod \"placement-db-create-wwrvj\" (UID: \"1d604b62-cdb4-4227-997f-defd9a3ca643\") " pod="openstack/placement-db-create-wwrvj"
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.178936 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/110c6611-8d0d-4f46-94a3-eab1a21743e9-operator-scripts\") pod \"placement-dc7e-account-create-update-dsvtj\" (UID: \"110c6611-8d0d-4f46-94a3-eab1a21743e9\") " pod="openstack/placement-dc7e-account-create-update-dsvtj"
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.178967 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db7qg\" (UniqueName: \"kubernetes.io/projected/110c6611-8d0d-4f46-94a3-eab1a21743e9-kube-api-access-db7qg\") pod \"placement-dc7e-account-create-update-dsvtj\" (UID: \"110c6611-8d0d-4f46-94a3-eab1a21743e9\") " pod="openstack/placement-dc7e-account-create-update-dsvtj"
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.217425 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-cx2rm"]
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.218764 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cx2rm"
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.235859 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cx2rm"]
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.280725 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/110c6611-8d0d-4f46-94a3-eab1a21743e9-operator-scripts\") pod \"placement-dc7e-account-create-update-dsvtj\" (UID: \"110c6611-8d0d-4f46-94a3-eab1a21743e9\") " pod="openstack/placement-dc7e-account-create-update-dsvtj"
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.280800 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db7qg\" (UniqueName: \"kubernetes.io/projected/110c6611-8d0d-4f46-94a3-eab1a21743e9-kube-api-access-db7qg\") pod \"placement-dc7e-account-create-update-dsvtj\" (UID: \"110c6611-8d0d-4f46-94a3-eab1a21743e9\") " pod="openstack/placement-dc7e-account-create-update-dsvtj"
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.280885 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9hs2\" (UniqueName: \"kubernetes.io/projected/c5f6337c-88cf-4544-b1f8-082325ebd6db-kube-api-access-k9hs2\") pod \"glance-db-create-cx2rm\" (UID: \"c5f6337c-88cf-4544-b1f8-082325ebd6db\") " pod="openstack/glance-db-create-cx2rm"
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.280952 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d604b62-cdb4-4227-997f-defd9a3ca643-operator-scripts\") pod \"placement-db-create-wwrvj\" (UID: \"1d604b62-cdb4-4227-997f-defd9a3ca643\") " pod="openstack/placement-db-create-wwrvj"
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.281012 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbmhm\" (UniqueName: \"kubernetes.io/projected/1d604b62-cdb4-4227-997f-defd9a3ca643-kube-api-access-jbmhm\") pod \"placement-db-create-wwrvj\" (UID: \"1d604b62-cdb4-4227-997f-defd9a3ca643\") " pod="openstack/placement-db-create-wwrvj"
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.281039 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5f6337c-88cf-4544-b1f8-082325ebd6db-operator-scripts\") pod \"glance-db-create-cx2rm\" (UID: \"c5f6337c-88cf-4544-b1f8-082325ebd6db\") " pod="openstack/glance-db-create-cx2rm"
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.282310 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d604b62-cdb4-4227-997f-defd9a3ca643-operator-scripts\") pod \"placement-db-create-wwrvj\" (UID: \"1d604b62-cdb4-4227-997f-defd9a3ca643\") " pod="openstack/placement-db-create-wwrvj"
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.285998 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/110c6611-8d0d-4f46-94a3-eab1a21743e9-operator-scripts\") pod \"placement-dc7e-account-create-update-dsvtj\" (UID: \"110c6611-8d0d-4f46-94a3-eab1a21743e9\") " pod="openstack/placement-dc7e-account-create-update-dsvtj"
Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.303627 4869
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db7qg\" (UniqueName: \"kubernetes.io/projected/110c6611-8d0d-4f46-94a3-eab1a21743e9-kube-api-access-db7qg\") pod \"placement-dc7e-account-create-update-dsvtj\" (UID: \"110c6611-8d0d-4f46-94a3-eab1a21743e9\") " pod="openstack/placement-dc7e-account-create-update-dsvtj" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.307739 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-c2a7-account-create-update-7z2sg"] Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.309027 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c2a7-account-create-update-7z2sg" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.310941 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.314531 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c2a7-account-create-update-7z2sg"] Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.319725 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbmhm\" (UniqueName: \"kubernetes.io/projected/1d604b62-cdb4-4227-997f-defd9a3ca643-kube-api-access-jbmhm\") pod \"placement-db-create-wwrvj\" (UID: \"1d604b62-cdb4-4227-997f-defd9a3ca643\") " pod="openstack/placement-db-create-wwrvj" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.375522 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wwrvj" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.382246 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vld4w\" (UniqueName: \"kubernetes.io/projected/b314971b-a6d0-4364-9753-480190c2ef5c-kube-api-access-vld4w\") pod \"glance-c2a7-account-create-update-7z2sg\" (UID: \"b314971b-a6d0-4364-9753-480190c2ef5c\") " pod="openstack/glance-c2a7-account-create-update-7z2sg" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.382411 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5f6337c-88cf-4544-b1f8-082325ebd6db-operator-scripts\") pod \"glance-db-create-cx2rm\" (UID: \"c5f6337c-88cf-4544-b1f8-082325ebd6db\") " pod="openstack/glance-db-create-cx2rm" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.382631 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b314971b-a6d0-4364-9753-480190c2ef5c-operator-scripts\") pod \"glance-c2a7-account-create-update-7z2sg\" (UID: \"b314971b-a6d0-4364-9753-480190c2ef5c\") " pod="openstack/glance-c2a7-account-create-update-7z2sg" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.382881 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9hs2\" (UniqueName: \"kubernetes.io/projected/c5f6337c-88cf-4544-b1f8-082325ebd6db-kube-api-access-k9hs2\") pod \"glance-db-create-cx2rm\" (UID: \"c5f6337c-88cf-4544-b1f8-082325ebd6db\") " pod="openstack/glance-db-create-cx2rm" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.384039 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5f6337c-88cf-4544-b1f8-082325ebd6db-operator-scripts\") pod 
\"glance-db-create-cx2rm\" (UID: \"c5f6337c-88cf-4544-b1f8-082325ebd6db\") " pod="openstack/glance-db-create-cx2rm" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.399209 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9hs2\" (UniqueName: \"kubernetes.io/projected/c5f6337c-88cf-4544-b1f8-082325ebd6db-kube-api-access-k9hs2\") pod \"glance-db-create-cx2rm\" (UID: \"c5f6337c-88cf-4544-b1f8-082325ebd6db\") " pod="openstack/glance-db-create-cx2rm" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.455384 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dc7e-account-create-update-dsvtj" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.484219 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vld4w\" (UniqueName: \"kubernetes.io/projected/b314971b-a6d0-4364-9753-480190c2ef5c-kube-api-access-vld4w\") pod \"glance-c2a7-account-create-update-7z2sg\" (UID: \"b314971b-a6d0-4364-9753-480190c2ef5c\") " pod="openstack/glance-c2a7-account-create-update-7z2sg" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.484347 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b314971b-a6d0-4364-9753-480190c2ef5c-operator-scripts\") pod \"glance-c2a7-account-create-update-7z2sg\" (UID: \"b314971b-a6d0-4364-9753-480190c2ef5c\") " pod="openstack/glance-c2a7-account-create-update-7z2sg" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.486232 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b314971b-a6d0-4364-9753-480190c2ef5c-operator-scripts\") pod \"glance-c2a7-account-create-update-7z2sg\" (UID: \"b314971b-a6d0-4364-9753-480190c2ef5c\") " pod="openstack/glance-c2a7-account-create-update-7z2sg" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.503551 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vld4w\" (UniqueName: \"kubernetes.io/projected/b314971b-a6d0-4364-9753-480190c2ef5c-kube-api-access-vld4w\") pod \"glance-c2a7-account-create-update-7z2sg\" (UID: \"b314971b-a6d0-4364-9753-480190c2ef5c\") " pod="openstack/glance-c2a7-account-create-update-7z2sg" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.537741 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cx2rm" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.694607 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c2a7-account-create-update-7z2sg" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.810445 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-llvbz"] Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.812669 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-llvbz" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.838770 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-llvbz"] Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.890323 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62f42f44-03d4-435c-a230-78a0252fd732-catalog-content\") pod \"certified-operators-llvbz\" (UID: \"62f42f44-03d4-435c-a230-78a0252fd732\") " pod="openshift-marketplace/certified-operators-llvbz" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.890379 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62f42f44-03d4-435c-a230-78a0252fd732-utilities\") pod \"certified-operators-llvbz\" (UID: \"62f42f44-03d4-435c-a230-78a0252fd732\") " pod="openshift-marketplace/certified-operators-llvbz" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.890407 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6hjh\" (UniqueName: \"kubernetes.io/projected/62f42f44-03d4-435c-a230-78a0252fd732-kube-api-access-d6hjh\") pod \"certified-operators-llvbz\" (UID: \"62f42f44-03d4-435c-a230-78a0252fd732\") " pod="openshift-marketplace/certified-operators-llvbz" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.992446 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-etc-swift\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.992490 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62f42f44-03d4-435c-a230-78a0252fd732-catalog-content\") pod \"certified-operators-llvbz\" (UID: \"62f42f44-03d4-435c-a230-78a0252fd732\") " pod="openshift-marketplace/certified-operators-llvbz" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.992544 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62f42f44-03d4-435c-a230-78a0252fd732-utilities\") pod \"certified-operators-llvbz\" (UID: \"62f42f44-03d4-435c-a230-78a0252fd732\") " pod="openshift-marketplace/certified-operators-llvbz" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.992581 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6hjh\" (UniqueName: \"kubernetes.io/projected/62f42f44-03d4-435c-a230-78a0252fd732-kube-api-access-d6hjh\") pod \"certified-operators-llvbz\" (UID: \"62f42f44-03d4-435c-a230-78a0252fd732\") " pod="openshift-marketplace/certified-operators-llvbz" Jan 27 10:09:23 crc kubenswrapper[4869]: E0127 10:09:23.992603 4869 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 10:09:23 crc kubenswrapper[4869]: E0127 10:09:23.992637 4869 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 10:09:23 crc kubenswrapper[4869]: E0127 10:09:23.992681 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-etc-swift podName:0429a74c-af6a-45f1-9ca2-b66dcd47ca38 nodeName:}" failed. No retries permitted until 2026-01-27 10:09:31.992665631 +0000 UTC m=+940.613089714 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-etc-swift") pod "swift-storage-0" (UID: "0429a74c-af6a-45f1-9ca2-b66dcd47ca38") : configmap "swift-ring-files" not found Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.993407 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62f42f44-03d4-435c-a230-78a0252fd732-catalog-content\") pod \"certified-operators-llvbz\" (UID: \"62f42f44-03d4-435c-a230-78a0252fd732\") " pod="openshift-marketplace/certified-operators-llvbz" Jan 27 10:09:23 crc kubenswrapper[4869]: I0127 10:09:23.993508 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62f42f44-03d4-435c-a230-78a0252fd732-utilities\") pod \"certified-operators-llvbz\" (UID: \"62f42f44-03d4-435c-a230-78a0252fd732\") " pod="openshift-marketplace/certified-operators-llvbz" Jan 27 10:09:24 crc kubenswrapper[4869]: I0127 10:09:24.013259 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6hjh\" (UniqueName: \"kubernetes.io/projected/62f42f44-03d4-435c-a230-78a0252fd732-kube-api-access-d6hjh\") pod \"certified-operators-llvbz\" (UID: \"62f42f44-03d4-435c-a230-78a0252fd732\") " pod="openshift-marketplace/certified-operators-llvbz" Jan 27 10:09:24 crc kubenswrapper[4869]: I0127 10:09:24.151979 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-llvbz" Jan 27 10:09:25 crc kubenswrapper[4869]: I0127 10:09:25.264367 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-k9lxv" Jan 27 10:09:25 crc kubenswrapper[4869]: I0127 10:09:25.318852 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/615c41f2-860d-4920-9b46-133877ae2067-operator-scripts\") pod \"615c41f2-860d-4920-9b46-133877ae2067\" (UID: \"615c41f2-860d-4920-9b46-133877ae2067\") " Jan 27 10:09:25 crc kubenswrapper[4869]: I0127 10:09:25.319181 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn2qd\" (UniqueName: \"kubernetes.io/projected/615c41f2-860d-4920-9b46-133877ae2067-kube-api-access-kn2qd\") pod \"615c41f2-860d-4920-9b46-133877ae2067\" (UID: \"615c41f2-860d-4920-9b46-133877ae2067\") " Jan 27 10:09:25 crc kubenswrapper[4869]: I0127 10:09:25.321304 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/615c41f2-860d-4920-9b46-133877ae2067-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "615c41f2-860d-4920-9b46-133877ae2067" (UID: "615c41f2-860d-4920-9b46-133877ae2067"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:09:25 crc kubenswrapper[4869]: I0127 10:09:25.327273 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/615c41f2-860d-4920-9b46-133877ae2067-kube-api-access-kn2qd" (OuterVolumeSpecName: "kube-api-access-kn2qd") pod "615c41f2-860d-4920-9b46-133877ae2067" (UID: "615c41f2-860d-4920-9b46-133877ae2067"). InnerVolumeSpecName "kube-api-access-kn2qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:09:25 crc kubenswrapper[4869]: I0127 10:09:25.340206 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-4z8cs" Jan 27 10:09:25 crc kubenswrapper[4869]: I0127 10:09:25.409466 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6dhv4"] Jan 27 10:09:25 crc kubenswrapper[4869]: I0127 10:09:25.409938 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" podUID="0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5" containerName="dnsmasq-dns" containerID="cri-o://85dbb6fd84162640527cdac258cc1cd28e8446189b73e3f09660d6676dbe770e" gracePeriod=10 Jan 27 10:09:25 crc kubenswrapper[4869]: I0127 10:09:25.421277 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/615c41f2-860d-4920-9b46-133877ae2067-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:25 crc kubenswrapper[4869]: I0127 10:09:25.421308 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn2qd\" (UniqueName: \"kubernetes.io/projected/615c41f2-860d-4920-9b46-133877ae2067-kube-api-access-kn2qd\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:25 crc kubenswrapper[4869]: I0127 10:09:25.786096 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nn56w" event={"ID":"f91198cd-1581-4ca7-9be2-98da975eefd7","Type":"ContainerStarted","Data":"5e3f6a1fea8627f660cb563244c9b85d25922d8b24f60fa4fb0dc13c0b6bc0b5"} Jan 27 10:09:25 crc kubenswrapper[4869]: I0127 10:09:25.793787 4869 generic.go:334] "Generic (PLEG): container finished" podID="0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5" containerID="85dbb6fd84162640527cdac258cc1cd28e8446189b73e3f09660d6676dbe770e" exitCode=0 Jan 27 10:09:25 crc kubenswrapper[4869]: I0127 10:09:25.793897 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" event={"ID":"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5","Type":"ContainerDied","Data":"85dbb6fd84162640527cdac258cc1cd28e8446189b73e3f09660d6676dbe770e"} Jan 27 10:09:25 crc kubenswrapper[4869]: I0127 10:09:25.796542 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k9lxv" event={"ID":"615c41f2-860d-4920-9b46-133877ae2067","Type":"ContainerDied","Data":"fb0a8ebb24cfc34923232b03663ceb22ec2038c630242de9fa8e0d1137c60d00"} Jan 27 10:09:25 crc kubenswrapper[4869]: I0127 10:09:25.796566 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb0a8ebb24cfc34923232b03663ceb22ec2038c630242de9fa8e0d1137c60d00" Jan 27 10:09:25 crc kubenswrapper[4869]: I0127 10:09:25.796590 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-k9lxv" Jan 27 10:09:25 crc kubenswrapper[4869]: I0127 10:09:25.810525 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-nn56w" podStartSLOduration=1.269544042 podStartE2EDuration="5.810503984s" podCreationTimestamp="2026-01-27 10:09:20 +0000 UTC" firstStartedPulling="2026-01-27 10:09:20.870921243 +0000 UTC m=+929.491345326" lastFinishedPulling="2026-01-27 10:09:25.411881185 +0000 UTC m=+934.032305268" observedRunningTime="2026-01-27 10:09:25.802697607 +0000 UTC m=+934.423121690" watchObservedRunningTime="2026-01-27 10:09:25.810503984 +0000 UTC m=+934.430928077" Jan 27 10:09:25 crc kubenswrapper[4869]: I0127 10:09:25.838596 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hcxqz"] Jan 27 10:09:25 crc kubenswrapper[4869]: W0127 10:09:25.844489 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8954ce1_4dee_4849_b0a5_26461590a6a0.slice/crio-44edcedd4567d919c51fc596bce98478cbda7143df322613c1504e4eed5a4971 WatchSource:0}: Error finding container 44edcedd4567d919c51fc596bce98478cbda7143df322613c1504e4eed5a4971: Status 404 returned error can't find the container with id 44edcedd4567d919c51fc596bce98478cbda7143df322613c1504e4eed5a4971 Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.269498 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wwrvj"] Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.283849 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4dqbq"] Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.292203 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-llvbz"] Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.315524 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c2a7-account-create-update-7z2sg"] Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.338049 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cx2rm"] Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.363929 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3ac0-account-create-update-s57tn"] Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.373501 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-dc7e-account-create-update-dsvtj"] Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.506666 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.651995 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-config\") pod \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\" (UID: \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\") " Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.652239 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-ovsdbserver-nb\") pod \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\" (UID: \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\") " Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.652363 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-ovsdbserver-sb\") pod \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\" (UID: \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\") " Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.652413 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h455t\" (UniqueName: \"kubernetes.io/projected/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-kube-api-access-h455t\") pod \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\" (UID: \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\") " Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.652475 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-dns-svc\") pod \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\" (UID: \"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5\") " Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.668500 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-kube-api-access-h455t" (OuterVolumeSpecName: "kube-api-access-h455t") pod "0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5" (UID: "0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5"). InnerVolumeSpecName "kube-api-access-h455t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.722094 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5" (UID: "0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.754886 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h455t\" (UniqueName: \"kubernetes.io/projected/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-kube-api-access-h455t\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.754914 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.805908 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c2a7-account-create-update-7z2sg" event={"ID":"b314971b-a6d0-4364-9753-480190c2ef5c","Type":"ContainerStarted","Data":"576831cf98d2ed1fc90b092efc4eb5405cc34706e3553be42afee52112943a6d"} Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.809565 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" event={"ID":"0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5","Type":"ContainerDied","Data":"b1d6053e8f7ff4e048fca544bd5cab73bcab0365e8f51925f7af5ed51fbf3a14"} Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.809612 4869 scope.go:117] "RemoveContainer" containerID="85dbb6fd84162640527cdac258cc1cd28e8446189b73e3f09660d6676dbe770e" Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.809647 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-6dhv4" Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.813219 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4dqbq" event={"ID":"5ada9983-506d-4de8-9d7d-8f7fc1bcb50f","Type":"ContainerStarted","Data":"5d5e4ee6fc95efc75e499a0e28e6e1c362385f8579a8d8abf1119385e3244f3f"} Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.813265 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4dqbq" event={"ID":"5ada9983-506d-4de8-9d7d-8f7fc1bcb50f","Type":"ContainerStarted","Data":"c9772c12c806fd14395fe3787a598267220756c4be1f0a41358b148da5a09b15"} Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.817470 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5" (UID: "0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.818066 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc7e-account-create-update-dsvtj" event={"ID":"110c6611-8d0d-4f46-94a3-eab1a21743e9","Type":"ContainerStarted","Data":"a5ad036308ca48df83b3b6d22a13df1ddfddadf82971bc2f2101352c9c3467bd"} Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.819301 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3ac0-account-create-update-s57tn" event={"ID":"9ec835b0-a5a2-4a65-ad57-8282ba92fc1c","Type":"ContainerStarted","Data":"62950fcb8b68056e29a8d461929c7582581eb2f112fd6a4fe82a3513ffc4e8b1"} Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.819346 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3ac0-account-create-update-s57tn" event={"ID":"9ec835b0-a5a2-4a65-ad57-8282ba92fc1c","Type":"ContainerStarted","Data":"4ad5905e78af446cc68503dfc69daabb67cc497d82a692c3c3518a001054c793"} Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.821913 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wwrvj" event={"ID":"1d604b62-cdb4-4227-997f-defd9a3ca643","Type":"ContainerStarted","Data":"b61a1962a0627adec90528afcb14548e6facb43632ff91879ca76fa5329e37a9"} Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.821952 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wwrvj" event={"ID":"1d604b62-cdb4-4227-997f-defd9a3ca643","Type":"ContainerStarted","Data":"5570543bdf459cf5b332018a25328048d8d2d4fdf8855ff50576e6325edff5cc"} Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.823074 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cx2rm" event={"ID":"c5f6337c-88cf-4544-b1f8-082325ebd6db","Type":"ContainerStarted","Data":"113b17db5d8d3328a1cbec18d180ea1f651386583fff61080229d02f8deca43e"} Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.823100 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cx2rm" event={"ID":"c5f6337c-88cf-4544-b1f8-082325ebd6db","Type":"ContainerStarted","Data":"a453e5c57915cb1d1c4059ba0750f39a4b0e18f80adeb8f7da8b91ea5928bd94"} Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.826021 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llvbz" event={"ID":"62f42f44-03d4-435c-a230-78a0252fd732","Type":"ContainerStarted","Data":"4517bd6537c7e7ad10f47a1f37b1c5368f2c25ac83a23c3e886b2736906f8c7f"} Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.826053 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llvbz" event={"ID":"62f42f44-03d4-435c-a230-78a0252fd732","Type":"ContainerStarted","Data":"009cdbfcfd4d68627415c7a7e1e7a368840a4594100cb8c6bc531b5e62a24eb7"} Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.827719 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-4dqbq" podStartSLOduration=4.827709452 podStartE2EDuration="4.827709452s" podCreationTimestamp="2026-01-27 10:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 10:09:26.827011349 +0000 UTC m=+935.447435432" watchObservedRunningTime="2026-01-27 10:09:26.827709452 +0000 UTC m=+935.448133535" Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 
10:09:26.828092 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-config" (OuterVolumeSpecName: "config") pod "0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5" (UID: "0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.832027 4869 scope.go:117] "RemoveContainer" containerID="d3740dfb68c44ffb7fffb66a92bd29a92913023df008bc7641d6d225c77634aa" Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.845176 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcxqz" event={"ID":"a8954ce1-4dee-4849-b0a5-26461590a6a0","Type":"ContainerStarted","Data":"ddc4f37d884745e486f6c5ca1ed8bee53e9e70a9fc217c4406083ef97960298a"} Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.845214 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcxqz" event={"ID":"a8954ce1-4dee-4849-b0a5-26461590a6a0","Type":"ContainerStarted","Data":"44edcedd4567d919c51fc596bce98478cbda7143df322613c1504e4eed5a4971"} Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.864929 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-cx2rm" podStartSLOduration=3.864905746 podStartE2EDuration="3.864905746s" podCreationTimestamp="2026-01-27 10:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 10:09:26.860668301 +0000 UTC m=+935.481092384" watchObservedRunningTime="2026-01-27 10:09:26.864905746 +0000 UTC m=+935.485329829" Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.870464 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-config\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.870498 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.881045 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-wwrvj" podStartSLOduration=4.8810252179999996 podStartE2EDuration="4.881025218s" podCreationTimestamp="2026-01-27 10:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 10:09:26.873571723 +0000 UTC m=+935.493995806" watchObservedRunningTime="2026-01-27 10:09:26.881025218 +0000 UTC m=+935.501449301" Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.887860 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5" (UID: "0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:09:26 crc kubenswrapper[4869]: I0127 10:09:26.972638 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:27 crc kubenswrapper[4869]: I0127 10:09:27.230631 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6dhv4"] Jan 27 10:09:27 crc kubenswrapper[4869]: I0127 10:09:27.241958 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-6dhv4"] Jan 27 10:09:27 crc kubenswrapper[4869]: I0127 10:09:27.852407 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" containerID="f467478ea01cc678f7c6abc730ff8cc244d20d6520b05fbe2af67046c78142ce" exitCode=0 Jan 27 10:09:27 crc kubenswrapper[4869]: I0127 10:09:27.852482 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerDied","Data":"f467478ea01cc678f7c6abc730ff8cc244d20d6520b05fbe2af67046c78142ce"} Jan 27 10:09:27 crc kubenswrapper[4869]: I0127 10:09:27.854524 4869 generic.go:334] "Generic (PLEG): container finished" podID="a8954ce1-4dee-4849-b0a5-26461590a6a0" containerID="ddc4f37d884745e486f6c5ca1ed8bee53e9e70a9fc217c4406083ef97960298a" exitCode=0 Jan 27 10:09:27 crc kubenswrapper[4869]: I0127 10:09:27.854604 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcxqz" event={"ID":"a8954ce1-4dee-4849-b0a5-26461590a6a0","Type":"ContainerDied","Data":"ddc4f37d884745e486f6c5ca1ed8bee53e9e70a9fc217c4406083ef97960298a"} Jan 27 10:09:27 crc kubenswrapper[4869]: I0127 10:09:27.856468 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc7e-account-create-update-dsvtj" event={"ID":"110c6611-8d0d-4f46-94a3-eab1a21743e9","Type":"ContainerStarted","Data":"5ff0694b17b24b23a78b92a14b7f964c9d85d21f09005a4787f65cccf15f6c2c"} Jan 27 10:09:27 crc kubenswrapper[4869]: I0127 10:09:27.858433 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c2a7-account-create-update-7z2sg" event={"ID":"b314971b-a6d0-4364-9753-480190c2ef5c","Type":"ContainerStarted","Data":"e239c2839d03971d82950ff237289596a2d933c8b9cb5698fb7440e4ec1e4993"} Jan 27 10:09:27 crc kubenswrapper[4869]: I0127 10:09:27.922463 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-3ac0-account-create-update-s57tn" podStartSLOduration=5.922436355 podStartE2EDuration="5.922436355s" podCreationTimestamp="2026-01-27 10:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 10:09:27.908631713 +0000 UTC m=+936.529055836" watchObservedRunningTime="2026-01-27 10:09:27.922436355 +0000 UTC m=+936.542860468" Jan 27 10:09:27 crc kubenswrapper[4869]: I0127 10:09:27.934988 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-dc7e-account-create-update-dsvtj" podStartSLOduration=4.934966765 podStartE2EDuration="4.934966765s" podCreationTimestamp="2026-01-27 10:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 10:09:27.927658125 +0000 UTC m=+936.548082218" 
watchObservedRunningTime="2026-01-27 10:09:27.934966765 +0000 UTC m=+936.555390858" Jan 27 10:09:27 crc kubenswrapper[4869]: I0127 10:09:27.978191 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-57j2k" Jan 27 10:09:27 crc kubenswrapper[4869]: I0127 10:09:27.978257 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-57j2k" Jan 27 10:09:28 crc kubenswrapper[4869]: I0127 10:09:28.043496 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5" path="/var/lib/kubelet/pods/0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5/volumes" Jan 27 10:09:28 crc kubenswrapper[4869]: I0127 10:09:28.877097 4869 generic.go:334] "Generic (PLEG): container finished" podID="9ec835b0-a5a2-4a65-ad57-8282ba92fc1c" containerID="62950fcb8b68056e29a8d461929c7582581eb2f112fd6a4fe82a3513ffc4e8b1" exitCode=0 Jan 27 10:09:28 crc kubenswrapper[4869]: I0127 10:09:28.877167 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3ac0-account-create-update-s57tn" event={"ID":"9ec835b0-a5a2-4a65-ad57-8282ba92fc1c","Type":"ContainerDied","Data":"62950fcb8b68056e29a8d461929c7582581eb2f112fd6a4fe82a3513ffc4e8b1"} Jan 27 10:09:28 crc kubenswrapper[4869]: I0127 10:09:28.879401 4869 generic.go:334] "Generic (PLEG): container finished" podID="1d604b62-cdb4-4227-997f-defd9a3ca643" containerID="b61a1962a0627adec90528afcb14548e6facb43632ff91879ca76fa5329e37a9" exitCode=0 Jan 27 10:09:28 crc kubenswrapper[4869]: I0127 10:09:28.879509 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wwrvj" event={"ID":"1d604b62-cdb4-4227-997f-defd9a3ca643","Type":"ContainerDied","Data":"b61a1962a0627adec90528afcb14548e6facb43632ff91879ca76fa5329e37a9"} Jan 27 10:09:28 crc kubenswrapper[4869]: I0127 10:09:28.880826 4869 generic.go:334] "Generic (PLEG): container finished" podID="c5f6337c-88cf-4544-b1f8-082325ebd6db" containerID="113b17db5d8d3328a1cbec18d180ea1f651386583fff61080229d02f8deca43e" exitCode=0 Jan 27 10:09:28 crc kubenswrapper[4869]: I0127 10:09:28.880879 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cx2rm" event={"ID":"c5f6337c-88cf-4544-b1f8-082325ebd6db","Type":"ContainerDied","Data":"113b17db5d8d3328a1cbec18d180ea1f651386583fff61080229d02f8deca43e"} Jan 27 10:09:28 crc kubenswrapper[4869]: I0127 10:09:28.882280 4869 generic.go:334] "Generic (PLEG): container finished" podID="b314971b-a6d0-4364-9753-480190c2ef5c" containerID="e239c2839d03971d82950ff237289596a2d933c8b9cb5698fb7440e4ec1e4993" exitCode=0 Jan 27 10:09:28 crc kubenswrapper[4869]: I0127 10:09:28.882313 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c2a7-account-create-update-7z2sg" event={"ID":"b314971b-a6d0-4364-9753-480190c2ef5c","Type":"ContainerDied","Data":"e239c2839d03971d82950ff237289596a2d933c8b9cb5698fb7440e4ec1e4993"} Jan 27 10:09:28 crc kubenswrapper[4869]: I0127 10:09:28.884318 4869 generic.go:334] "Generic (PLEG): container finished" podID="62f42f44-03d4-435c-a230-78a0252fd732" containerID="4517bd6537c7e7ad10f47a1f37b1c5368f2c25ac83a23c3e886b2736906f8c7f" exitCode=0 Jan 27 10:09:28 crc kubenswrapper[4869]: I0127 10:09:28.884409 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llvbz" 
event={"ID":"62f42f44-03d4-435c-a230-78a0252fd732","Type":"ContainerDied","Data":"4517bd6537c7e7ad10f47a1f37b1c5368f2c25ac83a23c3e886b2736906f8c7f"} Jan 27 10:09:28 crc kubenswrapper[4869]: I0127 10:09:28.885686 4869 generic.go:334] "Generic (PLEG): container finished" podID="5ada9983-506d-4de8-9d7d-8f7fc1bcb50f" containerID="5d5e4ee6fc95efc75e499a0e28e6e1c362385f8579a8d8abf1119385e3244f3f" exitCode=0 Jan 27 10:09:28 crc kubenswrapper[4869]: I0127 10:09:28.885729 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4dqbq" event={"ID":"5ada9983-506d-4de8-9d7d-8f7fc1bcb50f","Type":"ContainerDied","Data":"5d5e4ee6fc95efc75e499a0e28e6e1c362385f8579a8d8abf1119385e3244f3f"} Jan 27 10:09:28 crc kubenswrapper[4869]: I0127 10:09:28.888242 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerStarted","Data":"ecf4f869544f31eb663b1546d15e3e45aee03030a67919efec42fcedf1ff7c84"} Jan 27 10:09:28 crc kubenswrapper[4869]: I0127 10:09:28.888950 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 10:09:28 crc kubenswrapper[4869]: I0127 10:09:28.893082 4869 generic.go:334] "Generic (PLEG): container finished" podID="110c6611-8d0d-4f46-94a3-eab1a21743e9" containerID="5ff0694b17b24b23a78b92a14b7f964c9d85d21f09005a4787f65cccf15f6c2c" exitCode=0 Jan 27 10:09:28 crc kubenswrapper[4869]: I0127 10:09:28.893176 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc7e-account-create-update-dsvtj" event={"ID":"110c6611-8d0d-4f46-94a3-eab1a21743e9","Type":"ContainerDied","Data":"5ff0694b17b24b23a78b92a14b7f964c9d85d21f09005a4787f65cccf15f6c2c"} Jan 27 10:09:28 crc kubenswrapper[4869]: I0127 10:09:28.926718 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=43.137231506 podStartE2EDuration="50.926701062s" podCreationTimestamp="2026-01-27 10:08:38 +0000 UTC" firstStartedPulling="2026-01-27 10:08:46.416953847 +0000 UTC m=+895.037377930" lastFinishedPulling="2026-01-27 10:08:54.206423403 +0000 UTC m=+902.826847486" observedRunningTime="2026-01-27 10:09:28.92229508 +0000 UTC m=+937.542719193" watchObservedRunningTime="2026-01-27 10:09:28.926701062 +0000 UTC m=+937.547125145" Jan 27 10:09:29 crc kubenswrapper[4869]: I0127 10:09:29.034066 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-57j2k" podUID="d2064d31-adb6-40dd-9bb8-c05cb35b3519" containerName="registry-server" probeResult="failure" output=< Jan 27 10:09:29 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Jan 27 10:09:29 crc kubenswrapper[4869]: > Jan 27 10:09:29 crc kubenswrapper[4869]: I0127 10:09:29.910049 4869 generic.go:334] "Generic (PLEG): container finished" podID="a8954ce1-4dee-4849-b0a5-26461590a6a0" containerID="848f8dd663af11f4867b80c8b4d46b3564a5100e3ef1b4b94f4f931655642b61" exitCode=0 Jan 27 10:09:29 crc kubenswrapper[4869]: I0127 10:09:29.910111 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcxqz" event={"ID":"a8954ce1-4dee-4849-b0a5-26461590a6a0","Type":"ContainerDied","Data":"848f8dd663af11f4867b80c8b4d46b3564a5100e3ef1b4b94f4f931655642b61"} Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.333052 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-4dqbq" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.443682 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zp4l\" (UniqueName: \"kubernetes.io/projected/5ada9983-506d-4de8-9d7d-8f7fc1bcb50f-kube-api-access-7zp4l\") pod \"5ada9983-506d-4de8-9d7d-8f7fc1bcb50f\" (UID: \"5ada9983-506d-4de8-9d7d-8f7fc1bcb50f\") " Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.444191 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ada9983-506d-4de8-9d7d-8f7fc1bcb50f-operator-scripts\") pod \"5ada9983-506d-4de8-9d7d-8f7fc1bcb50f\" (UID: \"5ada9983-506d-4de8-9d7d-8f7fc1bcb50f\") " Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.444975 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ada9983-506d-4de8-9d7d-8f7fc1bcb50f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ada9983-506d-4de8-9d7d-8f7fc1bcb50f" (UID: "5ada9983-506d-4de8-9d7d-8f7fc1bcb50f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.450757 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ada9983-506d-4de8-9d7d-8f7fc1bcb50f-kube-api-access-7zp4l" (OuterVolumeSpecName: "kube-api-access-7zp4l") pod "5ada9983-506d-4de8-9d7d-8f7fc1bcb50f" (UID: "5ada9983-506d-4de8-9d7d-8f7fc1bcb50f"). InnerVolumeSpecName "kube-api-access-7zp4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.516936 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cx2rm" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.522118 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wwrvj" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.527891 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dc7e-account-create-update-dsvtj" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.537953 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3ac0-account-create-update-s57tn" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.545645 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ada9983-506d-4de8-9d7d-8f7fc1bcb50f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.545673 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zp4l\" (UniqueName: \"kubernetes.io/projected/5ada9983-506d-4de8-9d7d-8f7fc1bcb50f-kube-api-access-7zp4l\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.547255 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-c2a7-account-create-update-7z2sg" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.646503 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbmhm\" (UniqueName: \"kubernetes.io/projected/1d604b62-cdb4-4227-997f-defd9a3ca643-kube-api-access-jbmhm\") pod \"1d604b62-cdb4-4227-997f-defd9a3ca643\" (UID: \"1d604b62-cdb4-4227-997f-defd9a3ca643\") " Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.646560 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vld4w\" (UniqueName: \"kubernetes.io/projected/b314971b-a6d0-4364-9753-480190c2ef5c-kube-api-access-vld4w\") pod \"b314971b-a6d0-4364-9753-480190c2ef5c\" (UID: \"b314971b-a6d0-4364-9753-480190c2ef5c\") " Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.646606 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d604b62-cdb4-4227-997f-defd9a3ca643-operator-scripts\") pod \"1d604b62-cdb4-4227-997f-defd9a3ca643\" (UID: \"1d604b62-cdb4-4227-997f-defd9a3ca643\") " Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.646655 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/110c6611-8d0d-4f46-94a3-eab1a21743e9-operator-scripts\") pod \"110c6611-8d0d-4f46-94a3-eab1a21743e9\" (UID: \"110c6611-8d0d-4f46-94a3-eab1a21743e9\") " Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.646690 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2tff\" (UniqueName: \"kubernetes.io/projected/9ec835b0-a5a2-4a65-ad57-8282ba92fc1c-kube-api-access-r2tff\") pod \"9ec835b0-a5a2-4a65-ad57-8282ba92fc1c\" (UID: \"9ec835b0-a5a2-4a65-ad57-8282ba92fc1c\") " Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.646712 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ec835b0-a5a2-4a65-ad57-8282ba92fc1c-operator-scripts\") pod \"9ec835b0-a5a2-4a65-ad57-8282ba92fc1c\" (UID: \"9ec835b0-a5a2-4a65-ad57-8282ba92fc1c\") " Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.646753 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9hs2\" (UniqueName: \"kubernetes.io/projected/c5f6337c-88cf-4544-b1f8-082325ebd6db-kube-api-access-k9hs2\") pod \"c5f6337c-88cf-4544-b1f8-082325ebd6db\" (UID: \"c5f6337c-88cf-4544-b1f8-082325ebd6db\") " Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.646804 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db7qg\" (UniqueName: \"kubernetes.io/projected/110c6611-8d0d-4f46-94a3-eab1a21743e9-kube-api-access-db7qg\") pod \"110c6611-8d0d-4f46-94a3-eab1a21743e9\" (UID: \"110c6611-8d0d-4f46-94a3-eab1a21743e9\") " Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.646865 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b314971b-a6d0-4364-9753-480190c2ef5c-operator-scripts\") pod \"b314971b-a6d0-4364-9753-480190c2ef5c\" (UID: \"b314971b-a6d0-4364-9753-480190c2ef5c\") " Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.646965 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c5f6337c-88cf-4544-b1f8-082325ebd6db-operator-scripts\") pod \"c5f6337c-88cf-4544-b1f8-082325ebd6db\" (UID: \"c5f6337c-88cf-4544-b1f8-082325ebd6db\") " Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.647963 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f6337c-88cf-4544-b1f8-082325ebd6db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5f6337c-88cf-4544-b1f8-082325ebd6db" (UID: "c5f6337c-88cf-4544-b1f8-082325ebd6db"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.648791 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ec835b0-a5a2-4a65-ad57-8282ba92fc1c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9ec835b0-a5a2-4a65-ad57-8282ba92fc1c" (UID: "9ec835b0-a5a2-4a65-ad57-8282ba92fc1c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.650373 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b314971b-a6d0-4364-9753-480190c2ef5c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b314971b-a6d0-4364-9753-480190c2ef5c" (UID: "b314971b-a6d0-4364-9753-480190c2ef5c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.650812 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d604b62-cdb4-4227-997f-defd9a3ca643-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d604b62-cdb4-4227-997f-defd9a3ca643" (UID: "1d604b62-cdb4-4227-997f-defd9a3ca643"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.651125 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/110c6611-8d0d-4f46-94a3-eab1a21743e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "110c6611-8d0d-4f46-94a3-eab1a21743e9" (UID: "110c6611-8d0d-4f46-94a3-eab1a21743e9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.652334 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ec835b0-a5a2-4a65-ad57-8282ba92fc1c-kube-api-access-r2tff" (OuterVolumeSpecName: "kube-api-access-r2tff") pod "9ec835b0-a5a2-4a65-ad57-8282ba92fc1c" (UID: "9ec835b0-a5a2-4a65-ad57-8282ba92fc1c"). InnerVolumeSpecName "kube-api-access-r2tff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.653465 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b314971b-a6d0-4364-9753-480190c2ef5c-kube-api-access-vld4w" (OuterVolumeSpecName: "kube-api-access-vld4w") pod "b314971b-a6d0-4364-9753-480190c2ef5c" (UID: "b314971b-a6d0-4364-9753-480190c2ef5c"). InnerVolumeSpecName "kube-api-access-vld4w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.653798 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d604b62-cdb4-4227-997f-defd9a3ca643-kube-api-access-jbmhm" (OuterVolumeSpecName: "kube-api-access-jbmhm") pod "1d604b62-cdb4-4227-997f-defd9a3ca643" (UID: "1d604b62-cdb4-4227-997f-defd9a3ca643"). InnerVolumeSpecName "kube-api-access-jbmhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.654144 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/110c6611-8d0d-4f46-94a3-eab1a21743e9-kube-api-access-db7qg" (OuterVolumeSpecName: "kube-api-access-db7qg") pod "110c6611-8d0d-4f46-94a3-eab1a21743e9" (UID: "110c6611-8d0d-4f46-94a3-eab1a21743e9"). InnerVolumeSpecName "kube-api-access-db7qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.656527 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f6337c-88cf-4544-b1f8-082325ebd6db-kube-api-access-k9hs2" (OuterVolumeSpecName: "kube-api-access-k9hs2") pod "c5f6337c-88cf-4544-b1f8-082325ebd6db" (UID: "c5f6337c-88cf-4544-b1f8-082325ebd6db"). InnerVolumeSpecName "kube-api-access-k9hs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.748753 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b314971b-a6d0-4364-9753-480190c2ef5c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.748793 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5f6337c-88cf-4544-b1f8-082325ebd6db-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.748805 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbmhm\" (UniqueName: \"kubernetes.io/projected/1d604b62-cdb4-4227-997f-defd9a3ca643-kube-api-access-jbmhm\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.748817 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vld4w\" (UniqueName: \"kubernetes.io/projected/b314971b-a6d0-4364-9753-480190c2ef5c-kube-api-access-vld4w\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.748847 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d604b62-cdb4-4227-997f-defd9a3ca643-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.748857 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/110c6611-8d0d-4f46-94a3-eab1a21743e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.748867 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2tff\" (UniqueName: \"kubernetes.io/projected/9ec835b0-a5a2-4a65-ad57-8282ba92fc1c-kube-api-access-r2tff\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.748878 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9ec835b0-a5a2-4a65-ad57-8282ba92fc1c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.748887 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9hs2\" (UniqueName: \"kubernetes.io/projected/c5f6337c-88cf-4544-b1f8-082325ebd6db-kube-api-access-k9hs2\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.748897 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db7qg\" (UniqueName: \"kubernetes.io/projected/110c6611-8d0d-4f46-94a3-eab1a21743e9-kube-api-access-db7qg\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.921467 4869 generic.go:334] "Generic (PLEG): container finished" podID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" containerID="4a438d6694b54e570c6899662ea865d06be06fa8ba35abc67332c2d580cd3da4" exitCode=0 Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.921533 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerDied","Data":"4a438d6694b54e570c6899662ea865d06be06fa8ba35abc67332c2d580cd3da4"} Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.924497 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dc7e-account-create-update-dsvtj" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.924515 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc7e-account-create-update-dsvtj" event={"ID":"110c6611-8d0d-4f46-94a3-eab1a21743e9","Type":"ContainerDied","Data":"a5ad036308ca48df83b3b6d22a13df1ddfddadf82971bc2f2101352c9c3467bd"} Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.924847 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5ad036308ca48df83b3b6d22a13df1ddfddadf82971bc2f2101352c9c3467bd" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.929081 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-4dqbq" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.928603 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4dqbq" event={"ID":"5ada9983-506d-4de8-9d7d-8f7fc1bcb50f","Type":"ContainerDied","Data":"c9772c12c806fd14395fe3787a598267220756c4be1f0a41358b148da5a09b15"} Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.929446 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9772c12c806fd14395fe3787a598267220756c4be1f0a41358b148da5a09b15" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.952401 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcxqz" event={"ID":"a8954ce1-4dee-4849-b0a5-26461590a6a0","Type":"ContainerStarted","Data":"f2727c16811354346be1819caa54cb569da9a2b2aadd83892d3a19ca01029d2a"} Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.972643 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3ac0-account-create-update-s57tn" event={"ID":"9ec835b0-a5a2-4a65-ad57-8282ba92fc1c","Type":"ContainerDied","Data":"4ad5905e78af446cc68503dfc69daabb67cc497d82a692c3c3518a001054c793"} Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.972697 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ad5905e78af446cc68503dfc69daabb67cc497d82a692c3c3518a001054c793" Jan 27 10:09:30 crc kubenswrapper[4869]: I0127 10:09:30.972782 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3ac0-account-create-update-s57tn" Jan 27 10:09:31 crc kubenswrapper[4869]: I0127 10:09:31.009314 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wwrvj" event={"ID":"1d604b62-cdb4-4227-997f-defd9a3ca643","Type":"ContainerDied","Data":"5570543bdf459cf5b332018a25328048d8d2d4fdf8855ff50576e6325edff5cc"} Jan 27 10:09:31 crc kubenswrapper[4869]: I0127 10:09:31.009557 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5570543bdf459cf5b332018a25328048d8d2d4fdf8855ff50576e6325edff5cc" Jan 27 10:09:31 crc kubenswrapper[4869]: I0127 10:09:31.009675 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wwrvj" Jan 27 10:09:31 crc kubenswrapper[4869]: I0127 10:09:31.013653 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cx2rm" event={"ID":"c5f6337c-88cf-4544-b1f8-082325ebd6db","Type":"ContainerDied","Data":"a453e5c57915cb1d1c4059ba0750f39a4b0e18f80adeb8f7da8b91ea5928bd94"} Jan 27 10:09:31 crc kubenswrapper[4869]: I0127 10:09:31.013689 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a453e5c57915cb1d1c4059ba0750f39a4b0e18f80adeb8f7da8b91ea5928bd94" Jan 27 10:09:31 crc kubenswrapper[4869]: I0127 10:09:31.013742 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cx2rm" Jan 27 10:09:31 crc kubenswrapper[4869]: I0127 10:09:31.034882 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-c2a7-account-create-update-7z2sg" Jan 27 10:09:31 crc kubenswrapper[4869]: I0127 10:09:31.034882 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c2a7-account-create-update-7z2sg" event={"ID":"b314971b-a6d0-4364-9753-480190c2ef5c","Type":"ContainerDied","Data":"576831cf98d2ed1fc90b092efc4eb5405cc34706e3553be42afee52112943a6d"} Jan 27 10:09:31 crc kubenswrapper[4869]: I0127 10:09:31.035215 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="576831cf98d2ed1fc90b092efc4eb5405cc34706e3553be42afee52112943a6d" Jan 27 10:09:31 crc kubenswrapper[4869]: I0127 10:09:31.037275 4869 generic.go:334] "Generic (PLEG): container finished" podID="62f42f44-03d4-435c-a230-78a0252fd732" containerID="6ee36ec1584115bfec701a36c983d1ff6c9d2e4df6ed355b44ca9dd94c7973bc" exitCode=0 Jan 27 10:09:31 crc kubenswrapper[4869]: I0127 10:09:31.037302 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llvbz" event={"ID":"62f42f44-03d4-435c-a230-78a0252fd732","Type":"ContainerDied","Data":"6ee36ec1584115bfec701a36c983d1ff6c9d2e4df6ed355b44ca9dd94c7973bc"} Jan 27 10:09:31 crc kubenswrapper[4869]: I0127 10:09:31.136475 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hcxqz" podStartSLOduration=6.675212931 podStartE2EDuration="9.136455883s" podCreationTimestamp="2026-01-27 10:09:22 +0000 UTC" firstStartedPulling="2026-01-27 10:09:27.856288561 +0000 UTC m=+936.476712644" lastFinishedPulling="2026-01-27 10:09:30.317531513 +0000 UTC m=+938.937955596" observedRunningTime="2026-01-27 10:09:31.005391165 +0000 UTC m=+939.625815248" watchObservedRunningTime="2026-01-27 10:09:31.136455883 +0000 UTC m=+939.756879966" Jan 27 10:09:31 crc kubenswrapper[4869]: I0127 10:09:31.230996 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-k9lxv"] Jan 27 10:09:31 crc kubenswrapper[4869]: I0127 10:09:31.247794 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-k9lxv"] Jan 27 10:09:32 crc kubenswrapper[4869]: I0127 10:09:32.041122 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="615c41f2-860d-4920-9b46-133877ae2067" path="/var/lib/kubelet/pods/615c41f2-860d-4920-9b46-133877ae2067/volumes" Jan 27 10:09:32 crc kubenswrapper[4869]: I0127 10:09:32.044744 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerStarted","Data":"6911e661c2c8d1d0a4b4a6efb4a3ff172d341a80fc003e751e61dfc48fa304d8"} Jan 27 10:09:32 crc kubenswrapper[4869]: I0127 10:09:32.045118 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 10:09:32 crc kubenswrapper[4869]: I0127 10:09:32.072780 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=51.440837998 podStartE2EDuration="54.072757382s" podCreationTimestamp="2026-01-27 10:08:38 +0000 UTC" firstStartedPulling="2026-01-27 10:08:54.135875946 +0000 UTC m=+902.756300029" lastFinishedPulling="2026-01-27 10:08:56.76779529 +0000 UTC m=+905.388219413" observedRunningTime="2026-01-27 10:09:32.066719445 +0000 UTC m=+940.687143548" watchObservedRunningTime="2026-01-27 10:09:32.072757382 +0000 UTC m=+940.693181465" Jan 27 10:09:32 crc kubenswrapper[4869]: I0127 
10:09:32.076552 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-etc-swift\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0" Jan 27 10:09:32 crc kubenswrapper[4869]: E0127 10:09:32.076821 4869 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 10:09:32 crc kubenswrapper[4869]: E0127 10:09:32.076869 4869 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 10:09:32 crc kubenswrapper[4869]: E0127 10:09:32.076938 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-etc-swift podName:0429a74c-af6a-45f1-9ca2-b66dcd47ca38 nodeName:}" failed. No retries permitted until 2026-01-27 10:09:48.076917444 +0000 UTC m=+956.697341537 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-etc-swift") pod "swift-storage-0" (UID: "0429a74c-af6a-45f1-9ca2-b66dcd47ca38") : configmap "swift-ring-files" not found Jan 27 10:09:32 crc kubenswrapper[4869]: I0127 10:09:32.732824 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hcxqz" Jan 27 10:09:32 crc kubenswrapper[4869]: I0127 10:09:32.733299 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hcxqz" Jan 27 10:09:32 crc kubenswrapper[4869]: I0127 10:09:32.778772 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hcxqz" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.055324 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llvbz" event={"ID":"62f42f44-03d4-435c-a230-78a0252fd732","Type":"ContainerStarted","Data":"0aeac4901fde5c2ae190f60a0869a6c261ffecac06b384bce8bf4a1226858ee5"} Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.057152 4869 generic.go:334] "Generic (PLEG): container finished" podID="f91198cd-1581-4ca7-9be2-98da975eefd7" containerID="5e3f6a1fea8627f660cb563244c9b85d25922d8b24f60fa4fb0dc13c0b6bc0b5" exitCode=0 Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.057232 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nn56w" event={"ID":"f91198cd-1581-4ca7-9be2-98da975eefd7","Type":"ContainerDied","Data":"5e3f6a1fea8627f660cb563244c9b85d25922d8b24f60fa4fb0dc13c0b6bc0b5"} Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.059874 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" containerID="ecf4f869544f31eb663b1546d15e3e45aee03030a67919efec42fcedf1ff7c84" exitCode=0 Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.059918 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerDied","Data":"ecf4f869544f31eb663b1546d15e3e45aee03030a67919efec42fcedf1ff7c84"} Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.060724 4869 scope.go:117] "RemoveContainer" containerID="ecf4f869544f31eb663b1546d15e3e45aee03030a67919efec42fcedf1ff7c84" Jan 27 10:09:33 crc 
kubenswrapper[4869]: I0127 10:09:33.085088 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-llvbz" podStartSLOduration=6.404560943 podStartE2EDuration="10.085066323s" podCreationTimestamp="2026-01-27 10:09:23 +0000 UTC" firstStartedPulling="2026-01-27 10:09:28.885891144 +0000 UTC m=+937.506315227" lastFinishedPulling="2026-01-27 10:09:32.566396514 +0000 UTC m=+941.186820607" observedRunningTime="2026-01-27 10:09:33.083488579 +0000 UTC m=+941.703912692" watchObservedRunningTime="2026-01-27 10:09:33.085066323 +0000 UTC m=+941.705490416" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.563765 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-b9v6v"] Jan 27 10:09:33 crc kubenswrapper[4869]: E0127 10:09:33.564208 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec835b0-a5a2-4a65-ad57-8282ba92fc1c" containerName="mariadb-account-create-update" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.564221 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec835b0-a5a2-4a65-ad57-8282ba92fc1c" containerName="mariadb-account-create-update" Jan 27 10:09:33 crc kubenswrapper[4869]: E0127 10:09:33.564234 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b314971b-a6d0-4364-9753-480190c2ef5c" containerName="mariadb-account-create-update" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.564240 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b314971b-a6d0-4364-9753-480190c2ef5c" containerName="mariadb-account-create-update" Jan 27 10:09:33 crc kubenswrapper[4869]: E0127 10:09:33.564252 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d604b62-cdb4-4227-997f-defd9a3ca643" containerName="mariadb-database-create" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.564259 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d604b62-cdb4-4227-997f-defd9a3ca643" containerName="mariadb-database-create" Jan 27 10:09:33 crc kubenswrapper[4869]: E0127 10:09:33.564271 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5" containerName="init" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.564276 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5" containerName="init" Jan 27 10:09:33 crc kubenswrapper[4869]: E0127 10:09:33.564287 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="110c6611-8d0d-4f46-94a3-eab1a21743e9" containerName="mariadb-account-create-update" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.564293 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="110c6611-8d0d-4f46-94a3-eab1a21743e9" containerName="mariadb-account-create-update" Jan 27 10:09:33 crc kubenswrapper[4869]: E0127 10:09:33.564307 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="615c41f2-860d-4920-9b46-133877ae2067" containerName="mariadb-account-create-update" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.564312 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="615c41f2-860d-4920-9b46-133877ae2067" containerName="mariadb-account-create-update" Jan 27 10:09:33 crc kubenswrapper[4869]: E0127 10:09:33.564527 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ada9983-506d-4de8-9d7d-8f7fc1bcb50f" containerName="mariadb-database-create" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.564533 4869 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="5ada9983-506d-4de8-9d7d-8f7fc1bcb50f" containerName="mariadb-database-create" Jan 27 10:09:33 crc kubenswrapper[4869]: E0127 10:09:33.564545 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f6337c-88cf-4544-b1f8-082325ebd6db" containerName="mariadb-database-create" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.564551 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f6337c-88cf-4544-b1f8-082325ebd6db" containerName="mariadb-database-create" Jan 27 10:09:33 crc kubenswrapper[4869]: E0127 10:09:33.564560 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5" containerName="dnsmasq-dns" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.564567 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5" containerName="dnsmasq-dns" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.564740 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="110c6611-8d0d-4f46-94a3-eab1a21743e9" containerName="mariadb-account-create-update" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.564761 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f6337c-88cf-4544-b1f8-082325ebd6db" containerName="mariadb-database-create" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.564771 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ec835b0-a5a2-4a65-ad57-8282ba92fc1c" containerName="mariadb-account-create-update" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.564781 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="615c41f2-860d-4920-9b46-133877ae2067" containerName="mariadb-account-create-update" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.564789 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b314971b-a6d0-4364-9753-480190c2ef5c" containerName="mariadb-account-create-update" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.564799 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e469c0b-9a31-4e17-ad6a-8d9abfcc91b5" containerName="dnsmasq-dns" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.564810 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ada9983-506d-4de8-9d7d-8f7fc1bcb50f" containerName="mariadb-database-create" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.564880 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d604b62-cdb4-4227-997f-defd9a3ca643" containerName="mariadb-database-create" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.565455 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-b9v6v" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.567977 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.570660 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-d8jm9" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.576928 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-b9v6v"] Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.710626 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d81ee5-695a-463d-8e02-30d6abcc13c3-config-data\") pod \"glance-db-sync-b9v6v\" (UID: \"97d81ee5-695a-463d-8e02-30d6abcc13c3\") " pod="openstack/glance-db-sync-b9v6v" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.710698 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/97d81ee5-695a-463d-8e02-30d6abcc13c3-db-sync-config-data\") pod \"glance-db-sync-b9v6v\" (UID: \"97d81ee5-695a-463d-8e02-30d6abcc13c3\") " pod="openstack/glance-db-sync-b9v6v" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.711132 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq8dj\" (UniqueName: \"kubernetes.io/projected/97d81ee5-695a-463d-8e02-30d6abcc13c3-kube-api-access-zq8dj\") pod \"glance-db-sync-b9v6v\" (UID: \"97d81ee5-695a-463d-8e02-30d6abcc13c3\") " pod="openstack/glance-db-sync-b9v6v" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.711287 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d81ee5-695a-463d-8e02-30d6abcc13c3-combined-ca-bundle\") pod \"glance-db-sync-b9v6v\" (UID: \"97d81ee5-695a-463d-8e02-30d6abcc13c3\") " pod="openstack/glance-db-sync-b9v6v" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.813649 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d81ee5-695a-463d-8e02-30d6abcc13c3-config-data\") pod \"glance-db-sync-b9v6v\" (UID: \"97d81ee5-695a-463d-8e02-30d6abcc13c3\") " pod="openstack/glance-db-sync-b9v6v" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.814112 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/97d81ee5-695a-463d-8e02-30d6abcc13c3-db-sync-config-data\") pod \"glance-db-sync-b9v6v\" (UID: \"97d81ee5-695a-463d-8e02-30d6abcc13c3\") " pod="openstack/glance-db-sync-b9v6v" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.814328 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq8dj\" (UniqueName: \"kubernetes.io/projected/97d81ee5-695a-463d-8e02-30d6abcc13c3-kube-api-access-zq8dj\") pod \"glance-db-sync-b9v6v\" (UID: \"97d81ee5-695a-463d-8e02-30d6abcc13c3\") " pod="openstack/glance-db-sync-b9v6v" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.814533 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d81ee5-695a-463d-8e02-30d6abcc13c3-combined-ca-bundle\") pod 
\"glance-db-sync-b9v6v\" (UID: \"97d81ee5-695a-463d-8e02-30d6abcc13c3\") " pod="openstack/glance-db-sync-b9v6v" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.821508 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d81ee5-695a-463d-8e02-30d6abcc13c3-combined-ca-bundle\") pod \"glance-db-sync-b9v6v\" (UID: \"97d81ee5-695a-463d-8e02-30d6abcc13c3\") " pod="openstack/glance-db-sync-b9v6v" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.821983 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/97d81ee5-695a-463d-8e02-30d6abcc13c3-db-sync-config-data\") pod \"glance-db-sync-b9v6v\" (UID: \"97d81ee5-695a-463d-8e02-30d6abcc13c3\") " pod="openstack/glance-db-sync-b9v6v" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.822820 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d81ee5-695a-463d-8e02-30d6abcc13c3-config-data\") pod \"glance-db-sync-b9v6v\" (UID: \"97d81ee5-695a-463d-8e02-30d6abcc13c3\") " pod="openstack/glance-db-sync-b9v6v" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.841055 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq8dj\" (UniqueName: \"kubernetes.io/projected/97d81ee5-695a-463d-8e02-30d6abcc13c3-kube-api-access-zq8dj\") pod \"glance-db-sync-b9v6v\" (UID: \"97d81ee5-695a-463d-8e02-30d6abcc13c3\") " pod="openstack/glance-db-sync-b9v6v" Jan 27 10:09:33 crc kubenswrapper[4869]: I0127 10:09:33.914644 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-b9v6v" Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.078121 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerStarted","Data":"713cbe5ddc293222c05cc4d2e1f4a343d1d98adff94eae20f18043cdb0dd6332"} Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.078422 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.153014 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-llvbz" Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.153435 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-llvbz" Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.495334 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nn56w" Jan 27 10:09:34 crc kubenswrapper[4869]: W0127 10:09:34.565446 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97d81ee5_695a_463d_8e02_30d6abcc13c3.slice/crio-530c09bb9c1fc773306af705649acca3d4aa82fae7d23dac98ad62a6a279bd2f WatchSource:0}: Error finding container 530c09bb9c1fc773306af705649acca3d4aa82fae7d23dac98ad62a6a279bd2f: Status 404 returned error can't find the container with id 530c09bb9c1fc773306af705649acca3d4aa82fae7d23dac98ad62a6a279bd2f Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.573311 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-b9v6v"] Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.629570 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f91198cd-1581-4ca7-9be2-98da975eefd7-dispersionconf\") pod \"f91198cd-1581-4ca7-9be2-98da975eefd7\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.629625 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qllvd\" (UniqueName: \"kubernetes.io/projected/f91198cd-1581-4ca7-9be2-98da975eefd7-kube-api-access-qllvd\") pod \"f91198cd-1581-4ca7-9be2-98da975eefd7\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.629750 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f91198cd-1581-4ca7-9be2-98da975eefd7-scripts\") pod \"f91198cd-1581-4ca7-9be2-98da975eefd7\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.629943 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f91198cd-1581-4ca7-9be2-98da975eefd7-etc-swift\") pod \"f91198cd-1581-4ca7-9be2-98da975eefd7\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.629993 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f91198cd-1581-4ca7-9be2-98da975eefd7-swiftconf\") pod \"f91198cd-1581-4ca7-9be2-98da975eefd7\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.630017 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91198cd-1581-4ca7-9be2-98da975eefd7-combined-ca-bundle\") pod \"f91198cd-1581-4ca7-9be2-98da975eefd7\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.630064 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f91198cd-1581-4ca7-9be2-98da975eefd7-ring-data-devices\") pod \"f91198cd-1581-4ca7-9be2-98da975eefd7\" (UID: \"f91198cd-1581-4ca7-9be2-98da975eefd7\") " Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.630628 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91198cd-1581-4ca7-9be2-98da975eefd7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "f91198cd-1581-4ca7-9be2-98da975eefd7" (UID: 
"f91198cd-1581-4ca7-9be2-98da975eefd7"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.630926 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f91198cd-1581-4ca7-9be2-98da975eefd7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f91198cd-1581-4ca7-9be2-98da975eefd7" (UID: "f91198cd-1581-4ca7-9be2-98da975eefd7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.636103 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f91198cd-1581-4ca7-9be2-98da975eefd7-kube-api-access-qllvd" (OuterVolumeSpecName: "kube-api-access-qllvd") pod "f91198cd-1581-4ca7-9be2-98da975eefd7" (UID: "f91198cd-1581-4ca7-9be2-98da975eefd7"). InnerVolumeSpecName "kube-api-access-qllvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.639983 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91198cd-1581-4ca7-9be2-98da975eefd7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "f91198cd-1581-4ca7-9be2-98da975eefd7" (UID: "f91198cd-1581-4ca7-9be2-98da975eefd7"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.660844 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91198cd-1581-4ca7-9be2-98da975eefd7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f91198cd-1581-4ca7-9be2-98da975eefd7" (UID: "f91198cd-1581-4ca7-9be2-98da975eefd7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.661679 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91198cd-1581-4ca7-9be2-98da975eefd7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "f91198cd-1581-4ca7-9be2-98da975eefd7" (UID: "f91198cd-1581-4ca7-9be2-98da975eefd7"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.665224 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91198cd-1581-4ca7-9be2-98da975eefd7-scripts" (OuterVolumeSpecName: "scripts") pod "f91198cd-1581-4ca7-9be2-98da975eefd7" (UID: "f91198cd-1581-4ca7-9be2-98da975eefd7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.731451 4869 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f91198cd-1581-4ca7-9be2-98da975eefd7-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.731484 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qllvd\" (UniqueName: \"kubernetes.io/projected/f91198cd-1581-4ca7-9be2-98da975eefd7-kube-api-access-qllvd\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.731498 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f91198cd-1581-4ca7-9be2-98da975eefd7-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.731508 4869 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f91198cd-1581-4ca7-9be2-98da975eefd7-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.731518 4869 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f91198cd-1581-4ca7-9be2-98da975eefd7-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.731528 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91198cd-1581-4ca7-9be2-98da975eefd7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:34 crc kubenswrapper[4869]: I0127 10:09:34.731538 4869 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f91198cd-1581-4ca7-9be2-98da975eefd7-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:35 crc kubenswrapper[4869]: I0127 10:09:35.085656 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nn56w" event={"ID":"f91198cd-1581-4ca7-9be2-98da975eefd7","Type":"ContainerDied","Data":"155bbcc8a03239f988e1631b99667bf80458a97bd88dbfe34c41eef1ca15d7a4"} Jan 27 10:09:35 crc kubenswrapper[4869]: I0127 10:09:35.085696 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="155bbcc8a03239f988e1631b99667bf80458a97bd88dbfe34c41eef1ca15d7a4" Jan 27 10:09:35 crc kubenswrapper[4869]: I0127 10:09:35.085751 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nn56w" Jan 27 10:09:35 crc kubenswrapper[4869]: I0127 10:09:35.087878 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-b9v6v" event={"ID":"97d81ee5-695a-463d-8e02-30d6abcc13c3","Type":"ContainerStarted","Data":"530c09bb9c1fc773306af705649acca3d4aa82fae7d23dac98ad62a6a279bd2f"} Jan 27 10:09:35 crc kubenswrapper[4869]: I0127 10:09:35.216800 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-llvbz" podUID="62f42f44-03d4-435c-a230-78a0252fd732" containerName="registry-server" probeResult="failure" output=< Jan 27 10:09:35 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Jan 27 10:09:35 crc kubenswrapper[4869]: > Jan 27 10:09:35 crc kubenswrapper[4869]: I0127 10:09:35.997158 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.098927 4869 generic.go:334] "Generic (PLEG): container finished" podID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" containerID="6911e661c2c8d1d0a4b4a6efb4a3ff172d341a80fc003e751e61dfc48fa304d8" exitCode=0 Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.098971 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerDied","Data":"6911e661c2c8d1d0a4b4a6efb4a3ff172d341a80fc003e751e61dfc48fa304d8"} Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.099704 4869 scope.go:117] "RemoveContainer" containerID="6911e661c2c8d1d0a4b4a6efb4a3ff172d341a80fc003e751e61dfc48fa304d8" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.204581 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xv9d9"] Jan 27 10:09:36 crc kubenswrapper[4869]: E0127 10:09:36.204997 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91198cd-1581-4ca7-9be2-98da975eefd7" containerName="swift-ring-rebalance" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.205674 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91198cd-1581-4ca7-9be2-98da975eefd7" containerName="swift-ring-rebalance" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.207625 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f91198cd-1581-4ca7-9be2-98da975eefd7" containerName="swift-ring-rebalance" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.209046 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xv9d9" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.216288 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xv9d9"] Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.251574 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-sl8h2"] Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.254564 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sl8h2" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.269155 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.275122 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sl8h2"] Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.359183 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8sk9\" (UniqueName: \"kubernetes.io/projected/1b6f6e7c-5c2a-46d0-87df-730e291ea02b-kube-api-access-r8sk9\") pod \"redhat-marketplace-xv9d9\" (UID: \"1b6f6e7c-5c2a-46d0-87df-730e291ea02b\") " pod="openshift-marketplace/redhat-marketplace-xv9d9" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.359387 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b4f7c0-c984-455c-a928-9ebf3243bfe8-operator-scripts\") pod \"root-account-create-update-sl8h2\" (UID: \"34b4f7c0-c984-455c-a928-9ebf3243bfe8\") " pod="openstack/root-account-create-update-sl8h2" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.359467 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzmkb\" (UniqueName: \"kubernetes.io/projected/34b4f7c0-c984-455c-a928-9ebf3243bfe8-kube-api-access-wzmkb\") pod \"root-account-create-update-sl8h2\" (UID: \"34b4f7c0-c984-455c-a928-9ebf3243bfe8\") " pod="openstack/root-account-create-update-sl8h2" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.359522 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b6f6e7c-5c2a-46d0-87df-730e291ea02b-catalog-content\") pod \"redhat-marketplace-xv9d9\" (UID: \"1b6f6e7c-5c2a-46d0-87df-730e291ea02b\") " pod="openshift-marketplace/redhat-marketplace-xv9d9" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.359605 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b6f6e7c-5c2a-46d0-87df-730e291ea02b-utilities\") pod \"redhat-marketplace-xv9d9\" (UID: \"1b6f6e7c-5c2a-46d0-87df-730e291ea02b\") " pod="openshift-marketplace/redhat-marketplace-xv9d9" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.461547 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8sk9\" (UniqueName: \"kubernetes.io/projected/1b6f6e7c-5c2a-46d0-87df-730e291ea02b-kube-api-access-r8sk9\") pod \"redhat-marketplace-xv9d9\" (UID: \"1b6f6e7c-5c2a-46d0-87df-730e291ea02b\") " pod="openshift-marketplace/redhat-marketplace-xv9d9" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.461902 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b4f7c0-c984-455c-a928-9ebf3243bfe8-operator-scripts\") pod \"root-account-create-update-sl8h2\" (UID: \"34b4f7c0-c984-455c-a928-9ebf3243bfe8\") " pod="openstack/root-account-create-update-sl8h2" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.461928 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzmkb\" (UniqueName: 
\"kubernetes.io/projected/34b4f7c0-c984-455c-a928-9ebf3243bfe8-kube-api-access-wzmkb\") pod \"root-account-create-update-sl8h2\" (UID: \"34b4f7c0-c984-455c-a928-9ebf3243bfe8\") " pod="openstack/root-account-create-update-sl8h2" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.461955 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b6f6e7c-5c2a-46d0-87df-730e291ea02b-catalog-content\") pod \"redhat-marketplace-xv9d9\" (UID: \"1b6f6e7c-5c2a-46d0-87df-730e291ea02b\") " pod="openshift-marketplace/redhat-marketplace-xv9d9" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.462000 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b6f6e7c-5c2a-46d0-87df-730e291ea02b-utilities\") pod \"redhat-marketplace-xv9d9\" (UID: \"1b6f6e7c-5c2a-46d0-87df-730e291ea02b\") " pod="openshift-marketplace/redhat-marketplace-xv9d9" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.462434 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b6f6e7c-5c2a-46d0-87df-730e291ea02b-utilities\") pod \"redhat-marketplace-xv9d9\" (UID: \"1b6f6e7c-5c2a-46d0-87df-730e291ea02b\") " pod="openshift-marketplace/redhat-marketplace-xv9d9" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.463160 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b4f7c0-c984-455c-a928-9ebf3243bfe8-operator-scripts\") pod \"root-account-create-update-sl8h2\" (UID: \"34b4f7c0-c984-455c-a928-9ebf3243bfe8\") " pod="openstack/root-account-create-update-sl8h2" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.463498 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b6f6e7c-5c2a-46d0-87df-730e291ea02b-catalog-content\") pod \"redhat-marketplace-xv9d9\" (UID: \"1b6f6e7c-5c2a-46d0-87df-730e291ea02b\") " pod="openshift-marketplace/redhat-marketplace-xv9d9" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.479499 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzmkb\" (UniqueName: \"kubernetes.io/projected/34b4f7c0-c984-455c-a928-9ebf3243bfe8-kube-api-access-wzmkb\") pod \"root-account-create-update-sl8h2\" (UID: \"34b4f7c0-c984-455c-a928-9ebf3243bfe8\") " pod="openstack/root-account-create-update-sl8h2" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.479807 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8sk9\" (UniqueName: \"kubernetes.io/projected/1b6f6e7c-5c2a-46d0-87df-730e291ea02b-kube-api-access-r8sk9\") pod \"redhat-marketplace-xv9d9\" (UID: \"1b6f6e7c-5c2a-46d0-87df-730e291ea02b\") " pod="openshift-marketplace/redhat-marketplace-xv9d9" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.540936 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xv9d9" Jan 27 10:09:36 crc kubenswrapper[4869]: I0127 10:09:36.591285 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sl8h2" Jan 27 10:09:37 crc kubenswrapper[4869]: I0127 10:09:37.134321 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerStarted","Data":"a63333c886f71c4589bfb134abdcbf86ceea0ce23a1b3cc0ee0a818c84c74df8"} Jan 27 10:09:37 crc kubenswrapper[4869]: I0127 10:09:37.134534 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 10:09:37 crc kubenswrapper[4869]: I0127 10:09:37.224489 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sl8h2"] Jan 27 10:09:37 crc kubenswrapper[4869]: W0127 10:09:37.227233 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34b4f7c0_c984_455c_a928_9ebf3243bfe8.slice/crio-7866e84176bbb311e45ebe562e24e7d4eb46c808ca276a79f3ed9296f9c05067 WatchSource:0}: Error finding container 7866e84176bbb311e45ebe562e24e7d4eb46c808ca276a79f3ed9296f9c05067: Status 404 returned error can't find the container with id 7866e84176bbb311e45ebe562e24e7d4eb46c808ca276a79f3ed9296f9c05067 Jan 27 10:09:37 crc kubenswrapper[4869]: I0127 10:09:37.230381 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xv9d9"] Jan 27 10:09:38 crc kubenswrapper[4869]: I0127 10:09:38.045136 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-57j2k" Jan 27 10:09:38 crc kubenswrapper[4869]: I0127 10:09:38.095952 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-57j2k" Jan 27 10:09:38 crc kubenswrapper[4869]: I0127 10:09:38.145824 4869 generic.go:334] "Generic (PLEG): container finished" podID="1b6f6e7c-5c2a-46d0-87df-730e291ea02b" containerID="a8a588be6f3ec65e93d2ed11cde42e7b0aebeace7adf851dcb386c89657ccf53" exitCode=0 Jan 27 10:09:38 crc kubenswrapper[4869]: I0127 10:09:38.145968 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xv9d9" event={"ID":"1b6f6e7c-5c2a-46d0-87df-730e291ea02b","Type":"ContainerDied","Data":"a8a588be6f3ec65e93d2ed11cde42e7b0aebeace7adf851dcb386c89657ccf53"} Jan 27 10:09:38 crc kubenswrapper[4869]: I0127 10:09:38.146033 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xv9d9" event={"ID":"1b6f6e7c-5c2a-46d0-87df-730e291ea02b","Type":"ContainerStarted","Data":"00072f8de3145ed8c00598b1f7863910a3c868e3606e9a14e4cd536d7d949fbd"} Jan 27 10:09:38 crc kubenswrapper[4869]: I0127 10:09:38.153447 4869 generic.go:334] "Generic (PLEG): container finished" podID="34b4f7c0-c984-455c-a928-9ebf3243bfe8" containerID="6f9ba9e17522fd0c616709bd075a2b72dc9586262694908705bc5949b458c1db" exitCode=0 Jan 27 10:09:38 crc kubenswrapper[4869]: I0127 10:09:38.153864 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sl8h2" event={"ID":"34b4f7c0-c984-455c-a928-9ebf3243bfe8","Type":"ContainerDied","Data":"6f9ba9e17522fd0c616709bd075a2b72dc9586262694908705bc5949b458c1db"} Jan 27 10:09:38 crc kubenswrapper[4869]: I0127 10:09:38.153891 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sl8h2" 
event={"ID":"34b4f7c0-c984-455c-a928-9ebf3243bfe8","Type":"ContainerStarted","Data":"7866e84176bbb311e45ebe562e24e7d4eb46c808ca276a79f3ed9296f9c05067"} Jan 27 10:09:38 crc kubenswrapper[4869]: I0127 10:09:38.157282 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" containerID="713cbe5ddc293222c05cc4d2e1f4a343d1d98adff94eae20f18043cdb0dd6332" exitCode=0 Jan 27 10:09:38 crc kubenswrapper[4869]: I0127 10:09:38.158111 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerDied","Data":"713cbe5ddc293222c05cc4d2e1f4a343d1d98adff94eae20f18043cdb0dd6332"} Jan 27 10:09:38 crc kubenswrapper[4869]: I0127 10:09:38.158148 4869 scope.go:117] "RemoveContainer" containerID="ecf4f869544f31eb663b1546d15e3e45aee03030a67919efec42fcedf1ff7c84" Jan 27 10:09:38 crc kubenswrapper[4869]: I0127 10:09:38.159144 4869 scope.go:117] "RemoveContainer" containerID="713cbe5ddc293222c05cc4d2e1f4a343d1d98adff94eae20f18043cdb0dd6332" Jan 27 10:09:38 crc kubenswrapper[4869]: E0127 10:09:38.159312 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 10s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:09:38 crc kubenswrapper[4869]: I0127 10:09:38.502101 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qf659" podUID="e545b253-d74a-43e1-9a14-990ea5784f16" containerName="ovn-controller" probeResult="failure" output=< Jan 27 10:09:38 crc kubenswrapper[4869]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 10:09:38 crc kubenswrapper[4869]: > Jan 27 10:09:39 crc kubenswrapper[4869]: I0127 10:09:39.174369 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xv9d9" event={"ID":"1b6f6e7c-5c2a-46d0-87df-730e291ea02b","Type":"ContainerStarted","Data":"a01a46c9e0036d745b515f75508560862e3cb19a00c2f6581addb048c50b6141"} Jan 27 10:09:39 crc kubenswrapper[4869]: I0127 10:09:39.570796 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sl8h2" Jan 27 10:09:39 crc kubenswrapper[4869]: I0127 10:09:39.719520 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzmkb\" (UniqueName: \"kubernetes.io/projected/34b4f7c0-c984-455c-a928-9ebf3243bfe8-kube-api-access-wzmkb\") pod \"34b4f7c0-c984-455c-a928-9ebf3243bfe8\" (UID: \"34b4f7c0-c984-455c-a928-9ebf3243bfe8\") " Jan 27 10:09:39 crc kubenswrapper[4869]: I0127 10:09:39.719678 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b4f7c0-c984-455c-a928-9ebf3243bfe8-operator-scripts\") pod \"34b4f7c0-c984-455c-a928-9ebf3243bfe8\" (UID: \"34b4f7c0-c984-455c-a928-9ebf3243bfe8\") " Jan 27 10:09:39 crc kubenswrapper[4869]: I0127 10:09:39.720216 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b4f7c0-c984-455c-a928-9ebf3243bfe8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34b4f7c0-c984-455c-a928-9ebf3243bfe8" (UID: "34b4f7c0-c984-455c-a928-9ebf3243bfe8"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:09:39 crc kubenswrapper[4869]: I0127 10:09:39.725094 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b4f7c0-c984-455c-a928-9ebf3243bfe8-kube-api-access-wzmkb" (OuterVolumeSpecName: "kube-api-access-wzmkb") pod "34b4f7c0-c984-455c-a928-9ebf3243bfe8" (UID: "34b4f7c0-c984-455c-a928-9ebf3243bfe8"). InnerVolumeSpecName "kube-api-access-wzmkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:09:39 crc kubenswrapper[4869]: I0127 10:09:39.821612 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34b4f7c0-c984-455c-a928-9ebf3243bfe8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:39 crc kubenswrapper[4869]: I0127 10:09:39.821643 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzmkb\" (UniqueName: \"kubernetes.io/projected/34b4f7c0-c984-455c-a928-9ebf3243bfe8-kube-api-access-wzmkb\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:40 crc kubenswrapper[4869]: I0127 10:09:40.187380 4869 generic.go:334] "Generic (PLEG): container finished" podID="1b6f6e7c-5c2a-46d0-87df-730e291ea02b" containerID="a01a46c9e0036d745b515f75508560862e3cb19a00c2f6581addb048c50b6141" exitCode=0 Jan 27 10:09:40 crc kubenswrapper[4869]: I0127 10:09:40.187487 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xv9d9" event={"ID":"1b6f6e7c-5c2a-46d0-87df-730e291ea02b","Type":"ContainerDied","Data":"a01a46c9e0036d745b515f75508560862e3cb19a00c2f6581addb048c50b6141"} Jan 27 10:09:40 crc kubenswrapper[4869]: I0127 10:09:40.199527 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sl8h2" event={"ID":"34b4f7c0-c984-455c-a928-9ebf3243bfe8","Type":"ContainerDied","Data":"7866e84176bbb311e45ebe562e24e7d4eb46c808ca276a79f3ed9296f9c05067"} Jan 27 10:09:40 crc kubenswrapper[4869]: I0127 10:09:40.199557 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7866e84176bbb311e45ebe562e24e7d4eb46c808ca276a79f3ed9296f9c05067" Jan 27 10:09:40 crc kubenswrapper[4869]: I0127 10:09:40.199566 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sl8h2" Jan 27 10:09:40 crc kubenswrapper[4869]: I0127 10:09:40.606593 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-57j2k"] Jan 27 10:09:40 crc kubenswrapper[4869]: I0127 10:09:40.606862 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-57j2k" podUID="d2064d31-adb6-40dd-9bb8-c05cb35b3519" containerName="registry-server" containerID="cri-o://e8f86c4616646554c52b7e073d6ec6e67f5385932c083afbd6f17249e8892242" gracePeriod=2 Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.038934 4869 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 10:09:40 crc kubenswrapper[4869]: I0127 10:09:40.187380 4869 generic.go:334] "Generic (PLEG): container finished" podID="1b6f6e7c-5c2a-46d0-87df-730e291ea02b" containerID="a01a46c9e0036d745b515f75508560862e3cb19a00c2f6581addb048c50b6141" exitCode=0
Jan 27 10:09:40 crc kubenswrapper[4869]: I0127 10:09:40.187487 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xv9d9" event={"ID":"1b6f6e7c-5c2a-46d0-87df-730e291ea02b","Type":"ContainerDied","Data":"a01a46c9e0036d745b515f75508560862e3cb19a00c2f6581addb048c50b6141"}
Jan 27 10:09:40 crc kubenswrapper[4869]: I0127 10:09:40.199527 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sl8h2" event={"ID":"34b4f7c0-c984-455c-a928-9ebf3243bfe8","Type":"ContainerDied","Data":"7866e84176bbb311e45ebe562e24e7d4eb46c808ca276a79f3ed9296f9c05067"}
Jan 27 10:09:40 crc kubenswrapper[4869]: I0127 10:09:40.199557 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7866e84176bbb311e45ebe562e24e7d4eb46c808ca276a79f3ed9296f9c05067"
Jan 27 10:09:40 crc kubenswrapper[4869]: I0127 10:09:40.199566 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sl8h2"
Jan 27 10:09:40 crc kubenswrapper[4869]: I0127 10:09:40.606593 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-57j2k"]
Jan 27 10:09:40 crc kubenswrapper[4869]: I0127 10:09:40.606862 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-57j2k" podUID="d2064d31-adb6-40dd-9bb8-c05cb35b3519" containerName="registry-server" containerID="cri-o://e8f86c4616646554c52b7e073d6ec6e67f5385932c083afbd6f17249e8892242" gracePeriod=2
Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.038934 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-57j2k"
Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.146928 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2064d31-adb6-40dd-9bb8-c05cb35b3519-utilities\") pod \"d2064d31-adb6-40dd-9bb8-c05cb35b3519\" (UID: \"d2064d31-adb6-40dd-9bb8-c05cb35b3519\") "
Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.147041 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2064d31-adb6-40dd-9bb8-c05cb35b3519-catalog-content\") pod \"d2064d31-adb6-40dd-9bb8-c05cb35b3519\" (UID: \"d2064d31-adb6-40dd-9bb8-c05cb35b3519\") "
Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.147144 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w794b\" (UniqueName: \"kubernetes.io/projected/d2064d31-adb6-40dd-9bb8-c05cb35b3519-kube-api-access-w794b\") pod \"d2064d31-adb6-40dd-9bb8-c05cb35b3519\" (UID: \"d2064d31-adb6-40dd-9bb8-c05cb35b3519\") "
Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.147823 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2064d31-adb6-40dd-9bb8-c05cb35b3519-utilities" (OuterVolumeSpecName: "utilities") pod "d2064d31-adb6-40dd-9bb8-c05cb35b3519" (UID: "d2064d31-adb6-40dd-9bb8-c05cb35b3519"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.152815 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2064d31-adb6-40dd-9bb8-c05cb35b3519-kube-api-access-w794b" (OuterVolumeSpecName: "kube-api-access-w794b") pod "d2064d31-adb6-40dd-9bb8-c05cb35b3519" (UID: "d2064d31-adb6-40dd-9bb8-c05cb35b3519"). InnerVolumeSpecName "kube-api-access-w794b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.216255 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xv9d9" event={"ID":"1b6f6e7c-5c2a-46d0-87df-730e291ea02b","Type":"ContainerStarted","Data":"bb7dcf0c26b70aefe4d8a95f00e11a2323d077abb163c18a8bd01c83ecc26417"}
Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.220439 4869 generic.go:334] "Generic (PLEG): container finished" podID="d2064d31-adb6-40dd-9bb8-c05cb35b3519" containerID="e8f86c4616646554c52b7e073d6ec6e67f5385932c083afbd6f17249e8892242" exitCode=0
Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.220471 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57j2k" event={"ID":"d2064d31-adb6-40dd-9bb8-c05cb35b3519","Type":"ContainerDied","Data":"e8f86c4616646554c52b7e073d6ec6e67f5385932c083afbd6f17249e8892242"}
Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.220489 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-57j2k" event={"ID":"d2064d31-adb6-40dd-9bb8-c05cb35b3519","Type":"ContainerDied","Data":"60308b6ed926da0a9987ed91bfb116717b4113db5c271baefff9e849c257cedb"}
Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.220505 4869 scope.go:117] "RemoveContainer" containerID="e8f86c4616646554c52b7e073d6ec6e67f5385932c083afbd6f17249e8892242"
Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.220595 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-57j2k"
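
The redhat-operators-57j2k lines trace a normal API-initiated teardown end to end: "SyncLoop DELETE" arrives from the API server, the registry-server container is killed with the pod's grace period (2s here), and PLEG then reports ContainerDied for both the container and its sandbox. The same deletion could be requested from a client with client-go; a sketch with minimal error handling, assuming the default kubeconfig location:

    package main

    import (
    	"context"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a client from the default kubeconfig; auth details elided.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Delete with a short grace period; the kubelet then logs
    	// "Killing container with a grace period", as seen above.
    	grace := int64(2)
    	err = cs.CoreV1().Pods("openshift-marketplace").Delete(context.TODO(),
    		"redhat-operators-57j2k", metav1.DeleteOptions{GracePeriodSeconds: &grace})
    	if err != nil {
    		log.Fatal(err)
    	}
    }
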
Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.239113 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xv9d9" podStartSLOduration=2.646719044 podStartE2EDuration="5.239095886s" podCreationTimestamp="2026-01-27 10:09:36 +0000 UTC" firstStartedPulling="2026-01-27 10:09:38.147904114 +0000 UTC m=+946.768328197" lastFinishedPulling="2026-01-27 10:09:40.740280956 +0000 UTC m=+949.360705039" observedRunningTime="2026-01-27 10:09:41.231788995 +0000 UTC m=+949.852213078" watchObservedRunningTime="2026-01-27 10:09:41.239095886 +0000 UTC m=+949.859519969"
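
The pod_startup_latency_tracker entry above reports two durations whose relationship the timestamps confirm: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (10:09:41.239095886 - 10:09:36 = 5.239095886s), and podStartSLOduration is that E2E value minus the image-pull window (10:09:38.147904114 to 10:09:40.740280956, i.e. 2.592376842s), leaving exactly 2.646719044. The same arithmetic in Go, with the timestamps copied from the entry:

    package main

    import (
    	"fmt"
    	"time"
    )

    func mustParse(s string) time.Time {
    	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2026-01-27 10:09:36 +0000 UTC")
    	firstPull := mustParse("2026-01-27 10:09:38.147904114 +0000 UTC")
    	lastPull := mustParse("2026-01-27 10:09:40.740280956 +0000 UTC")
    	running := mustParse("2026-01-27 10:09:41.239095886 +0000 UTC")

    	e2e := running.Sub(created)          // podStartE2EDuration
    	slo := e2e - lastPull.Sub(firstPull) // image-pull window excluded
    	fmt.Println(e2e, slo)                // 5.239095886s 2.646719044s
    }
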
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.289071 4869 scope.go:117] "RemoveContainer" containerID="04555c959f7029caa84ccdeb3f7c4c8db71ffe197803916457ba7802492a6559" Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.315801 4869 scope.go:117] "RemoveContainer" containerID="e8f86c4616646554c52b7e073d6ec6e67f5385932c083afbd6f17249e8892242" Jan 27 10:09:41 crc kubenswrapper[4869]: E0127 10:09:41.316421 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8f86c4616646554c52b7e073d6ec6e67f5385932c083afbd6f17249e8892242\": container with ID starting with e8f86c4616646554c52b7e073d6ec6e67f5385932c083afbd6f17249e8892242 not found: ID does not exist" containerID="e8f86c4616646554c52b7e073d6ec6e67f5385932c083afbd6f17249e8892242" Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.316452 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8f86c4616646554c52b7e073d6ec6e67f5385932c083afbd6f17249e8892242"} err="failed to get container status \"e8f86c4616646554c52b7e073d6ec6e67f5385932c083afbd6f17249e8892242\": rpc error: code = NotFound desc = could not find container \"e8f86c4616646554c52b7e073d6ec6e67f5385932c083afbd6f17249e8892242\": container with ID starting with e8f86c4616646554c52b7e073d6ec6e67f5385932c083afbd6f17249e8892242 not found: ID does not exist" Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.316471 4869 scope.go:117] "RemoveContainer" containerID="5ff0ed6930fdbab4da8ae83b8076bfb8f8c51a2dda55aba05691cc2adf979c40" Jan 27 10:09:41 crc kubenswrapper[4869]: E0127 10:09:41.316818 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ff0ed6930fdbab4da8ae83b8076bfb8f8c51a2dda55aba05691cc2adf979c40\": container with ID starting with 5ff0ed6930fdbab4da8ae83b8076bfb8f8c51a2dda55aba05691cc2adf979c40 not found: ID does not exist" containerID="5ff0ed6930fdbab4da8ae83b8076bfb8f8c51a2dda55aba05691cc2adf979c40" Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.316844 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ff0ed6930fdbab4da8ae83b8076bfb8f8c51a2dda55aba05691cc2adf979c40"} err="failed to get container status \"5ff0ed6930fdbab4da8ae83b8076bfb8f8c51a2dda55aba05691cc2adf979c40\": rpc error: code = NotFound desc = could not find container \"5ff0ed6930fdbab4da8ae83b8076bfb8f8c51a2dda55aba05691cc2adf979c40\": container with ID starting with 5ff0ed6930fdbab4da8ae83b8076bfb8f8c51a2dda55aba05691cc2adf979c40 not found: ID does not exist" Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.316856 4869 scope.go:117] "RemoveContainer" containerID="04555c959f7029caa84ccdeb3f7c4c8db71ffe197803916457ba7802492a6559" Jan 27 10:09:41 crc kubenswrapper[4869]: E0127 10:09:41.317615 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04555c959f7029caa84ccdeb3f7c4c8db71ffe197803916457ba7802492a6559\": container with ID starting with 04555c959f7029caa84ccdeb3f7c4c8db71ffe197803916457ba7802492a6559 not found: ID does not exist" containerID="04555c959f7029caa84ccdeb3f7c4c8db71ffe197803916457ba7802492a6559" Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.317636 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"04555c959f7029caa84ccdeb3f7c4c8db71ffe197803916457ba7802492a6559"} err="failed to get container status \"04555c959f7029caa84ccdeb3f7c4c8db71ffe197803916457ba7802492a6559\": rpc error: code = NotFound desc = could not find container \"04555c959f7029caa84ccdeb3f7c4c8db71ffe197803916457ba7802492a6559\": container with ID starting with 04555c959f7029caa84ccdeb3f7c4c8db71ffe197803916457ba7802492a6559 not found: ID does not exist" Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.350570 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2064d31-adb6-40dd-9bb8-c05cb35b3519-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.591309 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-57j2k"] Jan 27 10:09:41 crc kubenswrapper[4869]: I0127 10:09:41.601145 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-57j2k"] Jan 27 10:09:42 crc kubenswrapper[4869]: I0127 10:09:42.042764 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2064d31-adb6-40dd-9bb8-c05cb35b3519" path="/var/lib/kubelet/pods/d2064d31-adb6-40dd-9bb8-c05cb35b3519/volumes" Jan 27 10:09:42 crc kubenswrapper[4869]: I0127 10:09:42.238363 4869 generic.go:334] "Generic (PLEG): container finished" podID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" containerID="a63333c886f71c4589bfb134abdcbf86ceea0ce23a1b3cc0ee0a818c84c74df8" exitCode=0 Jan 27 10:09:42 crc kubenswrapper[4869]: I0127 10:09:42.238433 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerDied","Data":"a63333c886f71c4589bfb134abdcbf86ceea0ce23a1b3cc0ee0a818c84c74df8"} Jan 27 10:09:42 crc kubenswrapper[4869]: I0127 10:09:42.238472 4869 scope.go:117] "RemoveContainer" containerID="6911e661c2c8d1d0a4b4a6efb4a3ff172d341a80fc003e751e61dfc48fa304d8" Jan 27 10:09:42 crc kubenswrapper[4869]: I0127 10:09:42.239024 4869 scope.go:117] "RemoveContainer" containerID="a63333c886f71c4589bfb134abdcbf86ceea0ce23a1b3cc0ee0a818c84c74df8" Jan 27 10:09:42 crc kubenswrapper[4869]: E0127 10:09:42.239298 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 10s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:09:42 crc kubenswrapper[4869]: I0127 10:09:42.780616 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hcxqz" Jan 27 10:09:43 crc kubenswrapper[4869]: I0127 10:09:43.512717 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qf659" podUID="e545b253-d74a-43e1-9a14-990ea5784f16" containerName="ovn-controller" probeResult="failure" output=< Jan 27 10:09:43 crc kubenswrapper[4869]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 10:09:43 crc kubenswrapper[4869]: > Jan 27 10:09:43 crc kubenswrapper[4869]: I0127 10:09:43.558351 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:09:43 crc kubenswrapper[4869]: I0127 10:09:43.572126 4869 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jd977" Jan 27 10:09:43 crc kubenswrapper[4869]: I0127 10:09:43.805624 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qf659-config-plps2"] Jan 27 10:09:43 crc kubenswrapper[4869]: E0127 10:09:43.805999 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b4f7c0-c984-455c-a928-9ebf3243bfe8" containerName="mariadb-account-create-update" Jan 27 10:09:43 crc kubenswrapper[4869]: I0127 10:09:43.806014 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b4f7c0-c984-455c-a928-9ebf3243bfe8" containerName="mariadb-account-create-update" Jan 27 10:09:43 crc kubenswrapper[4869]: E0127 10:09:43.806034 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2064d31-adb6-40dd-9bb8-c05cb35b3519" containerName="extract-utilities" Jan 27 10:09:43 crc kubenswrapper[4869]: I0127 10:09:43.806042 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2064d31-adb6-40dd-9bb8-c05cb35b3519" containerName="extract-utilities" Jan 27 10:09:43 crc kubenswrapper[4869]: E0127 10:09:43.806055 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2064d31-adb6-40dd-9bb8-c05cb35b3519" containerName="registry-server" Jan 27 10:09:43 crc kubenswrapper[4869]: I0127 10:09:43.806062 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2064d31-adb6-40dd-9bb8-c05cb35b3519" containerName="registry-server" Jan 27 10:09:43 crc kubenswrapper[4869]: E0127 10:09:43.806071 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2064d31-adb6-40dd-9bb8-c05cb35b3519" containerName="extract-content" Jan 27 10:09:43 crc kubenswrapper[4869]: I0127 10:09:43.806077 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2064d31-adb6-40dd-9bb8-c05cb35b3519" containerName="extract-content" Jan 27 10:09:43 crc kubenswrapper[4869]: I0127 10:09:43.806235 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2064d31-adb6-40dd-9bb8-c05cb35b3519" containerName="registry-server" Jan 27 10:09:43 crc kubenswrapper[4869]: I0127 10:09:43.806247 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b4f7c0-c984-455c-a928-9ebf3243bfe8" containerName="mariadb-account-create-update" Jan 27 10:09:43 crc kubenswrapper[4869]: I0127 10:09:43.806743 4869 util.go:30] "No sandbox for pod can be found. 
Jan 27 10:09:43 crc kubenswrapper[4869]: I0127 10:09:43.809230 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 27 10:09:43 crc kubenswrapper[4869]: I0127 10:09:43.811694 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qf659-config-plps2"]
Jan 27 10:09:43 crc kubenswrapper[4869]: I0127 10:09:43.902708 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f30fbb69-9cbc-4a99-b25e-1cf09396382f-var-run\") pod \"ovn-controller-qf659-config-plps2\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") " pod="openstack/ovn-controller-qf659-config-plps2"
Jan 27 10:09:43 crc kubenswrapper[4869]: I0127 10:09:43.902811 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f30fbb69-9cbc-4a99-b25e-1cf09396382f-scripts\") pod \"ovn-controller-qf659-config-plps2\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") " pod="openstack/ovn-controller-qf659-config-plps2"
Jan 27 10:09:43 crc kubenswrapper[4869]: I0127 10:09:43.902864 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f30fbb69-9cbc-4a99-b25e-1cf09396382f-var-log-ovn\") pod \"ovn-controller-qf659-config-plps2\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") " pod="openstack/ovn-controller-qf659-config-plps2"
Jan 27 10:09:43 crc kubenswrapper[4869]: I0127 10:09:43.902921 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f30fbb69-9cbc-4a99-b25e-1cf09396382f-var-run-ovn\") pod \"ovn-controller-qf659-config-plps2\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") " pod="openstack/ovn-controller-qf659-config-plps2"
Jan 27 10:09:43 crc kubenswrapper[4869]: I0127 10:09:43.902945 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f30fbb69-9cbc-4a99-b25e-1cf09396382f-additional-scripts\") pod \"ovn-controller-qf659-config-plps2\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") " pod="openstack/ovn-controller-qf659-config-plps2"
Jan 27 10:09:43 crc kubenswrapper[4869]: I0127 10:09:43.903033 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml2sv\" (UniqueName: \"kubernetes.io/projected/f30fbb69-9cbc-4a99-b25e-1cf09396382f-kube-api-access-ml2sv\") pod \"ovn-controller-qf659-config-plps2\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") " pod="openstack/ovn-controller-qf659-config-plps2"
Jan 27 10:09:44 crc kubenswrapper[4869]: I0127 10:09:44.004523 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f30fbb69-9cbc-4a99-b25e-1cf09396382f-scripts\") pod \"ovn-controller-qf659-config-plps2\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") " pod="openstack/ovn-controller-qf659-config-plps2"
Jan 27 10:09:44 crc kubenswrapper[4869]: I0127 10:09:44.004566 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f30fbb69-9cbc-4a99-b25e-1cf09396382f-var-log-ovn\") pod \"ovn-controller-qf659-config-plps2\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") " pod="openstack/ovn-controller-qf659-config-plps2"
\"ovn-controller-qf659-config-plps2\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") " pod="openstack/ovn-controller-qf659-config-plps2" Jan 27 10:09:44 crc kubenswrapper[4869]: I0127 10:09:44.004593 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f30fbb69-9cbc-4a99-b25e-1cf09396382f-var-run-ovn\") pod \"ovn-controller-qf659-config-plps2\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") " pod="openstack/ovn-controller-qf659-config-plps2" Jan 27 10:09:44 crc kubenswrapper[4869]: I0127 10:09:44.004616 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f30fbb69-9cbc-4a99-b25e-1cf09396382f-additional-scripts\") pod \"ovn-controller-qf659-config-plps2\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") " pod="openstack/ovn-controller-qf659-config-plps2" Jan 27 10:09:44 crc kubenswrapper[4869]: I0127 10:09:44.004672 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml2sv\" (UniqueName: \"kubernetes.io/projected/f30fbb69-9cbc-4a99-b25e-1cf09396382f-kube-api-access-ml2sv\") pod \"ovn-controller-qf659-config-plps2\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") " pod="openstack/ovn-controller-qf659-config-plps2" Jan 27 10:09:44 crc kubenswrapper[4869]: I0127 10:09:44.004724 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f30fbb69-9cbc-4a99-b25e-1cf09396382f-var-run\") pod \"ovn-controller-qf659-config-plps2\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") " pod="openstack/ovn-controller-qf659-config-plps2" Jan 27 10:09:44 crc kubenswrapper[4869]: I0127 10:09:44.004931 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f30fbb69-9cbc-4a99-b25e-1cf09396382f-var-log-ovn\") pod \"ovn-controller-qf659-config-plps2\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") " pod="openstack/ovn-controller-qf659-config-plps2" Jan 27 10:09:44 crc kubenswrapper[4869]: I0127 10:09:44.004947 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f30fbb69-9cbc-4a99-b25e-1cf09396382f-var-run-ovn\") pod \"ovn-controller-qf659-config-plps2\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") " pod="openstack/ovn-controller-qf659-config-plps2" Jan 27 10:09:44 crc kubenswrapper[4869]: I0127 10:09:44.005018 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f30fbb69-9cbc-4a99-b25e-1cf09396382f-var-run\") pod \"ovn-controller-qf659-config-plps2\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") " pod="openstack/ovn-controller-qf659-config-plps2" Jan 27 10:09:44 crc kubenswrapper[4869]: I0127 10:09:44.005539 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f30fbb69-9cbc-4a99-b25e-1cf09396382f-additional-scripts\") pod \"ovn-controller-qf659-config-plps2\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") " pod="openstack/ovn-controller-qf659-config-plps2" Jan 27 10:09:44 crc kubenswrapper[4869]: I0127 10:09:44.008267 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f30fbb69-9cbc-4a99-b25e-1cf09396382f-scripts\") pod 
\"ovn-controller-qf659-config-plps2\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") " pod="openstack/ovn-controller-qf659-config-plps2" Jan 27 10:09:44 crc kubenswrapper[4869]: I0127 10:09:44.022394 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml2sv\" (UniqueName: \"kubernetes.io/projected/f30fbb69-9cbc-4a99-b25e-1cf09396382f-kube-api-access-ml2sv\") pod \"ovn-controller-qf659-config-plps2\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") " pod="openstack/ovn-controller-qf659-config-plps2" Jan 27 10:09:44 crc kubenswrapper[4869]: I0127 10:09:44.146482 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qf659-config-plps2" Jan 27 10:09:44 crc kubenswrapper[4869]: I0127 10:09:44.209521 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-llvbz" Jan 27 10:09:44 crc kubenswrapper[4869]: I0127 10:09:44.261103 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-llvbz" Jan 27 10:09:45 crc kubenswrapper[4869]: I0127 10:09:45.697561 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:09:45 crc kubenswrapper[4869]: I0127 10:09:45.697895 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:09:45 crc kubenswrapper[4869]: I0127 10:09:45.697941 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 10:09:45 crc kubenswrapper[4869]: I0127 10:09:45.698430 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dc8a6d1fdbc6b3f8427a05417ce1783a27aac64b6b76b4051c7a781e964cbb0b"} pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 10:09:45 crc kubenswrapper[4869]: I0127 10:09:45.698488 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" containerID="cri-o://dc8a6d1fdbc6b3f8427a05417ce1783a27aac64b6b76b4051c7a781e964cbb0b" gracePeriod=600 Jan 27 10:09:46 crc kubenswrapper[4869]: I0127 10:09:46.302727 4869 generic.go:334] "Generic (PLEG): container finished" podID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerID="dc8a6d1fdbc6b3f8427a05417ce1783a27aac64b6b76b4051c7a781e964cbb0b" exitCode=0 Jan 27 10:09:46 crc kubenswrapper[4869]: I0127 10:09:46.302770 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerDied","Data":"dc8a6d1fdbc6b3f8427a05417ce1783a27aac64b6b76b4051c7a781e964cbb0b"} Jan 27 10:09:46 crc kubenswrapper[4869]: I0127 10:09:46.541035 4869 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xv9d9" Jan 27 10:09:46 crc kubenswrapper[4869]: I0127 10:09:46.541184 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xv9d9" Jan 27 10:09:46 crc kubenswrapper[4869]: I0127 10:09:46.596646 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xv9d9" Jan 27 10:09:46 crc kubenswrapper[4869]: I0127 10:09:46.797466 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hcxqz"] Jan 27 10:09:46 crc kubenswrapper[4869]: I0127 10:09:46.797688 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hcxqz" podUID="a8954ce1-4dee-4849-b0a5-26461590a6a0" containerName="registry-server" containerID="cri-o://f2727c16811354346be1819caa54cb569da9a2b2aadd83892d3a19ca01029d2a" gracePeriod=2 Jan 27 10:09:47 crc kubenswrapper[4869]: I0127 10:09:47.001498 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-llvbz"] Jan 27 10:09:47 crc kubenswrapper[4869]: I0127 10:09:47.001746 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-llvbz" podUID="62f42f44-03d4-435c-a230-78a0252fd732" containerName="registry-server" containerID="cri-o://0aeac4901fde5c2ae190f60a0869a6c261ffecac06b384bce8bf4a1226858ee5" gracePeriod=2 Jan 27 10:09:47 crc kubenswrapper[4869]: I0127 10:09:47.312582 4869 generic.go:334] "Generic (PLEG): container finished" podID="a8954ce1-4dee-4849-b0a5-26461590a6a0" containerID="f2727c16811354346be1819caa54cb569da9a2b2aadd83892d3a19ca01029d2a" exitCode=0 Jan 27 10:09:47 crc kubenswrapper[4869]: I0127 10:09:47.312641 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcxqz" event={"ID":"a8954ce1-4dee-4849-b0a5-26461590a6a0","Type":"ContainerDied","Data":"f2727c16811354346be1819caa54cb569da9a2b2aadd83892d3a19ca01029d2a"} Jan 27 10:09:47 crc kubenswrapper[4869]: I0127 10:09:47.314356 4869 generic.go:334] "Generic (PLEG): container finished" podID="62f42f44-03d4-435c-a230-78a0252fd732" containerID="0aeac4901fde5c2ae190f60a0869a6c261ffecac06b384bce8bf4a1226858ee5" exitCode=0 Jan 27 10:09:47 crc kubenswrapper[4869]: I0127 10:09:47.314408 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llvbz" event={"ID":"62f42f44-03d4-435c-a230-78a0252fd732","Type":"ContainerDied","Data":"0aeac4901fde5c2ae190f60a0869a6c261ffecac06b384bce8bf4a1226858ee5"} Jan 27 10:09:47 crc kubenswrapper[4869]: I0127 10:09:47.353299 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xv9d9" Jan 27 10:09:47 crc kubenswrapper[4869]: I0127 10:09:47.811924 4869 scope.go:117] "RemoveContainer" containerID="4a99f8d4039d41e36670df28e70519808f43f55b1ba2158821f11696774fdec4" Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.000437 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hcxqz" Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.061306 4869 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.069168 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8954ce1-4dee-4849-b0a5-26461590a6a0-utilities\") pod \"a8954ce1-4dee-4849-b0a5-26461590a6a0\" (UID: \"a8954ce1-4dee-4849-b0a5-26461590a6a0\") "
Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.069232 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8sng\" (UniqueName: \"kubernetes.io/projected/a8954ce1-4dee-4849-b0a5-26461590a6a0-kube-api-access-g8sng\") pod \"a8954ce1-4dee-4849-b0a5-26461590a6a0\" (UID: \"a8954ce1-4dee-4849-b0a5-26461590a6a0\") "
Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.069347 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8954ce1-4dee-4849-b0a5-26461590a6a0-catalog-content\") pod \"a8954ce1-4dee-4849-b0a5-26461590a6a0\" (UID: \"a8954ce1-4dee-4849-b0a5-26461590a6a0\") "
Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.070063 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8954ce1-4dee-4849-b0a5-26461590a6a0-utilities" (OuterVolumeSpecName: "utilities") pod "a8954ce1-4dee-4849-b0a5-26461590a6a0" (UID: "a8954ce1-4dee-4849-b0a5-26461590a6a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.073210 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8954ce1-4dee-4849-b0a5-26461590a6a0-kube-api-access-g8sng" (OuterVolumeSpecName: "kube-api-access-g8sng") pod "a8954ce1-4dee-4849-b0a5-26461590a6a0" (UID: "a8954ce1-4dee-4849-b0a5-26461590a6a0"). InnerVolumeSpecName "kube-api-access-g8sng". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.130754 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8954ce1-4dee-4849-b0a5-26461590a6a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8954ce1-4dee-4849-b0a5-26461590a6a0" (UID: "a8954ce1-4dee-4849-b0a5-26461590a6a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.171620 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62f42f44-03d4-435c-a230-78a0252fd732-utilities\") pod \"62f42f44-03d4-435c-a230-78a0252fd732\" (UID: \"62f42f44-03d4-435c-a230-78a0252fd732\") " Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.171683 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6hjh\" (UniqueName: \"kubernetes.io/projected/62f42f44-03d4-435c-a230-78a0252fd732-kube-api-access-d6hjh\") pod \"62f42f44-03d4-435c-a230-78a0252fd732\" (UID: \"62f42f44-03d4-435c-a230-78a0252fd732\") " Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.171970 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62f42f44-03d4-435c-a230-78a0252fd732-catalog-content\") pod \"62f42f44-03d4-435c-a230-78a0252fd732\" (UID: \"62f42f44-03d4-435c-a230-78a0252fd732\") " Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.172254 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-etc-swift\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0" Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.172345 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8954ce1-4dee-4849-b0a5-26461590a6a0-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.172363 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8sng\" (UniqueName: \"kubernetes.io/projected/a8954ce1-4dee-4849-b0a5-26461590a6a0-kube-api-access-g8sng\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.172377 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8954ce1-4dee-4849-b0a5-26461590a6a0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.174749 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62f42f44-03d4-435c-a230-78a0252fd732-kube-api-access-d6hjh" (OuterVolumeSpecName: "kube-api-access-d6hjh") pod "62f42f44-03d4-435c-a230-78a0252fd732" (UID: "62f42f44-03d4-435c-a230-78a0252fd732"). InnerVolumeSpecName "kube-api-access-d6hjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.175605 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62f42f44-03d4-435c-a230-78a0252fd732-utilities" (OuterVolumeSpecName: "utilities") pod "62f42f44-03d4-435c-a230-78a0252fd732" (UID: "62f42f44-03d4-435c-a230-78a0252fd732"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.177363 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0429a74c-af6a-45f1-9ca2-b66dcd47ca38-etc-swift\") pod \"swift-storage-0\" (UID: \"0429a74c-af6a-45f1-9ca2-b66dcd47ca38\") " pod="openstack/swift-storage-0" Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.221661 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62f42f44-03d4-435c-a230-78a0252fd732-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62f42f44-03d4-435c-a230-78a0252fd732" (UID: "62f42f44-03d4-435c-a230-78a0252fd732"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.226429 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qf659-config-plps2"] Jan 27 10:09:48 crc kubenswrapper[4869]: W0127 10:09:48.231243 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf30fbb69_9cbc_4a99_b25e_1cf09396382f.slice/crio-cb1ad478571e295466970b3bd529d01394221c339c65b65b2ad59e778ab6fd2e WatchSource:0}: Error finding container cb1ad478571e295466970b3bd529d01394221c339c65b65b2ad59e778ab6fd2e: Status 404 returned error can't find the container with id cb1ad478571e295466970b3bd529d01394221c339c65b65b2ad59e778ab6fd2e Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.256741 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.274231 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62f42f44-03d4-435c-a230-78a0252fd732-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.274264 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62f42f44-03d4-435c-a230-78a0252fd732-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.274282 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6hjh\" (UniqueName: \"kubernetes.io/projected/62f42f44-03d4-435c-a230-78a0252fd732-kube-api-access-d6hjh\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.333905 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llvbz" event={"ID":"62f42f44-03d4-435c-a230-78a0252fd732","Type":"ContainerDied","Data":"009cdbfcfd4d68627415c7a7e1e7a368840a4594100cb8c6bc531b5e62a24eb7"} Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.334148 4869 scope.go:117] "RemoveContainer" containerID="0aeac4901fde5c2ae190f60a0869a6c261ffecac06b384bce8bf4a1226858ee5" Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.334311 4869 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.359864 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qf659-config-plps2" event={"ID":"f30fbb69-9cbc-4a99-b25e-1cf09396382f","Type":"ContainerStarted","Data":"cb1ad478571e295466970b3bd529d01394221c339c65b65b2ad59e778ab6fd2e"}
Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.374751 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-b9v6v" event={"ID":"97d81ee5-695a-463d-8e02-30d6abcc13c3","Type":"ContainerStarted","Data":"6bf82af922b85d626cea63b3634be750c808560cba053f73ffeec66c8e6f02dd"}
Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.393217 4869 scope.go:117] "RemoveContainer" containerID="6ee36ec1584115bfec701a36c983d1ff6c9d2e4df6ed355b44ca9dd94c7973bc"
Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.393362 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcxqz" event={"ID":"a8954ce1-4dee-4849-b0a5-26461590a6a0","Type":"ContainerDied","Data":"44edcedd4567d919c51fc596bce98478cbda7143df322613c1504e4eed5a4971"}
Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.393427 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hcxqz"
Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.394368 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-llvbz"]
Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.402203 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerStarted","Data":"1b72347200950347d222694240cf88dda5067f82f3f49e7890c07c595718e823"}
Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.402553 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-llvbz"]
Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.404405 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-b9v6v" podStartSLOduration=2.280007357 podStartE2EDuration="15.404388853s" podCreationTimestamp="2026-01-27 10:09:33 +0000 UTC" firstStartedPulling="2026-01-27 10:09:34.567632316 +0000 UTC m=+943.188056399" lastFinishedPulling="2026-01-27 10:09:47.692013812 +0000 UTC m=+956.312437895" observedRunningTime="2026-01-27 10:09:48.393340955 +0000 UTC m=+957.013765048" watchObservedRunningTime="2026-01-27 10:09:48.404388853 +0000 UTC m=+957.024812936"
Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.433712 4869 scope.go:117] "RemoveContainer" containerID="4517bd6537c7e7ad10f47a1f37b1c5368f2c25ac83a23c3e886b2736906f8c7f"
Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.451778 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hcxqz"]
Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.458439 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hcxqz"]
Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.476213 4869 scope.go:117] "RemoveContainer" containerID="f2727c16811354346be1819caa54cb569da9a2b2aadd83892d3a19ca01029d2a"
Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.495138 4869 scope.go:117] "RemoveContainer" containerID="848f8dd663af11f4867b80c8b4d46b3564a5100e3ef1b4b94f4f931655642b61"
containerID="848f8dd663af11f4867b80c8b4d46b3564a5100e3ef1b4b94f4f931655642b61" Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.524032 4869 scope.go:117] "RemoveContainer" containerID="ddc4f37d884745e486f6c5ca1ed8bee53e9e70a9fc217c4406083ef97960298a" Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.527238 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qf659" podUID="e545b253-d74a-43e1-9a14-990ea5784f16" containerName="ovn-controller" probeResult="failure" output=< Jan 27 10:09:48 crc kubenswrapper[4869]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 10:09:48 crc kubenswrapper[4869]: > Jan 27 10:09:48 crc kubenswrapper[4869]: W0127 10:09:48.765409 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0429a74c_af6a_45f1_9ca2_b66dcd47ca38.slice/crio-0029da77aedb39cb9291521ad245a3e8ea20c4ee5af5ee63216f17d08b1afd59 WatchSource:0}: Error finding container 0029da77aedb39cb9291521ad245a3e8ea20c4ee5af5ee63216f17d08b1afd59: Status 404 returned error can't find the container with id 0029da77aedb39cb9291521ad245a3e8ea20c4ee5af5ee63216f17d08b1afd59 Jan 27 10:09:48 crc kubenswrapper[4869]: I0127 10:09:48.771248 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 10:09:49 crc kubenswrapper[4869]: I0127 10:09:49.207904 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xv9d9"] Jan 27 10:09:49 crc kubenswrapper[4869]: I0127 10:09:49.417085 4869 generic.go:334] "Generic (PLEG): container finished" podID="f30fbb69-9cbc-4a99-b25e-1cf09396382f" containerID="5acb071bf278e773630a67ba0b747812bd3d2f99ab4e4975c643e04f9c9da920" exitCode=0 Jan 27 10:09:49 crc kubenswrapper[4869]: I0127 10:09:49.417270 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qf659-config-plps2" event={"ID":"f30fbb69-9cbc-4a99-b25e-1cf09396382f","Type":"ContainerDied","Data":"5acb071bf278e773630a67ba0b747812bd3d2f99ab4e4975c643e04f9c9da920"} Jan 27 10:09:49 crc kubenswrapper[4869]: I0127 10:09:49.443729 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0429a74c-af6a-45f1-9ca2-b66dcd47ca38","Type":"ContainerStarted","Data":"0029da77aedb39cb9291521ad245a3e8ea20c4ee5af5ee63216f17d08b1afd59"} Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.041909 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62f42f44-03d4-435c-a230-78a0252fd732" path="/var/lib/kubelet/pods/62f42f44-03d4-435c-a230-78a0252fd732/volumes" Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.042535 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8954ce1-4dee-4849-b0a5-26461590a6a0" path="/var/lib/kubelet/pods/a8954ce1-4dee-4849-b0a5-26461590a6a0/volumes" Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.453368 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0429a74c-af6a-45f1-9ca2-b66dcd47ca38","Type":"ContainerStarted","Data":"bd5f41ef1f8b5d5f7da9d1efbecacb4b440701ef6bfa18e174581ed5b1e74111"} Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.453411 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0429a74c-af6a-45f1-9ca2-b66dcd47ca38","Type":"ContainerStarted","Data":"c81970729ae332f811ab58c3eb0afe7410bb00863bbfd2f0f2bac22a195e7b44"} Jan 27 
Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.453424 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0429a74c-af6a-45f1-9ca2-b66dcd47ca38","Type":"ContainerStarted","Data":"ff2773b712f7c76254317a1ca5344efe6e5c297ba404059cac8f05f176b857cf"}
Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.453433 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0429a74c-af6a-45f1-9ca2-b66dcd47ca38","Type":"ContainerStarted","Data":"081c370ffa0ad97eaf5b18715c0a7e4da9e990a053d6d8a394b3711385812507"}
Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.453538 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xv9d9" podUID="1b6f6e7c-5c2a-46d0-87df-730e291ea02b" containerName="registry-server" containerID="cri-o://bb7dcf0c26b70aefe4d8a95f00e11a2323d077abb163c18a8bd01c83ecc26417" gracePeriod=2
Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.814972 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qf659-config-plps2"
Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.915974 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f30fbb69-9cbc-4a99-b25e-1cf09396382f-additional-scripts\") pod \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") "
Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.916032 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f30fbb69-9cbc-4a99-b25e-1cf09396382f-var-run\") pod \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") "
Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.916068 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f30fbb69-9cbc-4a99-b25e-1cf09396382f-var-log-ovn\") pod \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") "
Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.916159 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f30fbb69-9cbc-4a99-b25e-1cf09396382f-var-run-ovn\") pod \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") "
Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.916162 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f30fbb69-9cbc-4a99-b25e-1cf09396382f-var-run" (OuterVolumeSpecName: "var-run") pod "f30fbb69-9cbc-4a99-b25e-1cf09396382f" (UID: "f30fbb69-9cbc-4a99-b25e-1cf09396382f"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.916213 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml2sv\" (UniqueName: \"kubernetes.io/projected/f30fbb69-9cbc-4a99-b25e-1cf09396382f-kube-api-access-ml2sv\") pod \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") " Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.916226 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f30fbb69-9cbc-4a99-b25e-1cf09396382f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f30fbb69-9cbc-4a99-b25e-1cf09396382f" (UID: "f30fbb69-9cbc-4a99-b25e-1cf09396382f"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.916245 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f30fbb69-9cbc-4a99-b25e-1cf09396382f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f30fbb69-9cbc-4a99-b25e-1cf09396382f" (UID: "f30fbb69-9cbc-4a99-b25e-1cf09396382f"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.916309 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f30fbb69-9cbc-4a99-b25e-1cf09396382f-scripts\") pod \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\" (UID: \"f30fbb69-9cbc-4a99-b25e-1cf09396382f\") " Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.916672 4869 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f30fbb69-9cbc-4a99-b25e-1cf09396382f-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.916687 4869 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f30fbb69-9cbc-4a99-b25e-1cf09396382f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.916703 4869 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f30fbb69-9cbc-4a99-b25e-1cf09396382f-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.917458 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f30fbb69-9cbc-4a99-b25e-1cf09396382f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f30fbb69-9cbc-4a99-b25e-1cf09396382f" (UID: "f30fbb69-9cbc-4a99-b25e-1cf09396382f"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.917823 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f30fbb69-9cbc-4a99-b25e-1cf09396382f-scripts" (OuterVolumeSpecName: "scripts") pod "f30fbb69-9cbc-4a99-b25e-1cf09396382f" (UID: "f30fbb69-9cbc-4a99-b25e-1cf09396382f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:09:50 crc kubenswrapper[4869]: I0127 10:09:50.922199 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f30fbb69-9cbc-4a99-b25e-1cf09396382f-kube-api-access-ml2sv" (OuterVolumeSpecName: "kube-api-access-ml2sv") pod "f30fbb69-9cbc-4a99-b25e-1cf09396382f" (UID: "f30fbb69-9cbc-4a99-b25e-1cf09396382f"). InnerVolumeSpecName "kube-api-access-ml2sv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:09:51 crc kubenswrapper[4869]: I0127 10:09:51.018605 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml2sv\" (UniqueName: \"kubernetes.io/projected/f30fbb69-9cbc-4a99-b25e-1cf09396382f-kube-api-access-ml2sv\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:51 crc kubenswrapper[4869]: I0127 10:09:51.018634 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f30fbb69-9cbc-4a99-b25e-1cf09396382f-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:51 crc kubenswrapper[4869]: I0127 10:09:51.018644 4869 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f30fbb69-9cbc-4a99-b25e-1cf09396382f-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:51 crc kubenswrapper[4869]: I0127 10:09:51.470248 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qf659-config-plps2" event={"ID":"f30fbb69-9cbc-4a99-b25e-1cf09396382f","Type":"ContainerDied","Data":"cb1ad478571e295466970b3bd529d01394221c339c65b65b2ad59e778ab6fd2e"} Jan 27 10:09:51 crc kubenswrapper[4869]: I0127 10:09:51.470584 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb1ad478571e295466970b3bd529d01394221c339c65b65b2ad59e778ab6fd2e" Jan 27 10:09:51 crc kubenswrapper[4869]: I0127 10:09:51.470300 4869 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 10:09:51 crc kubenswrapper[4869]: I0127 10:09:51.474999 4869 generic.go:334] "Generic (PLEG): container finished" podID="1b6f6e7c-5c2a-46d0-87df-730e291ea02b" containerID="bb7dcf0c26b70aefe4d8a95f00e11a2323d077abb163c18a8bd01c83ecc26417" exitCode=0
Jan 27 10:09:51 crc kubenswrapper[4869]: I0127 10:09:51.475085 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xv9d9" event={"ID":"1b6f6e7c-5c2a-46d0-87df-730e291ea02b","Type":"ContainerDied","Data":"bb7dcf0c26b70aefe4d8a95f00e11a2323d077abb163c18a8bd01c83ecc26417"}
Jan 27 10:09:51 crc kubenswrapper[4869]: I0127 10:09:51.938960 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qf659-config-plps2"]
Jan 27 10:09:51 crc kubenswrapper[4869]: I0127 10:09:51.949886 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qf659-config-plps2"]
Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.084234 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f30fbb69-9cbc-4a99-b25e-1cf09396382f" path="/var/lib/kubelet/pods/f30fbb69-9cbc-4a99-b25e-1cf09396382f/volumes"
Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.084865 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qf659-config-q8z2f"]
Jan 27 10:09:52 crc kubenswrapper[4869]: E0127 10:09:52.085131 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8954ce1-4dee-4849-b0a5-26461590a6a0" containerName="registry-server"
Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.085147 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8954ce1-4dee-4849-b0a5-26461590a6a0" containerName="registry-server"
Jan 27 10:09:52 crc kubenswrapper[4869]: E0127 10:09:52.085160 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f30fbb69-9cbc-4a99-b25e-1cf09396382f" containerName="ovn-config"
Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.085166 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f30fbb69-9cbc-4a99-b25e-1cf09396382f" containerName="ovn-config"
Jan 27 10:09:52 crc kubenswrapper[4869]: E0127 10:09:52.085181 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8954ce1-4dee-4849-b0a5-26461590a6a0" containerName="extract-utilities"
Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.085187 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8954ce1-4dee-4849-b0a5-26461590a6a0" containerName="extract-utilities"
Jan 27 10:09:52 crc kubenswrapper[4869]: E0127 10:09:52.085200 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f42f44-03d4-435c-a230-78a0252fd732" containerName="extract-utilities"
Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.085206 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f42f44-03d4-435c-a230-78a0252fd732" containerName="extract-utilities"
Jan 27 10:09:52 crc kubenswrapper[4869]: E0127 10:09:52.085220 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8954ce1-4dee-4849-b0a5-26461590a6a0" containerName="extract-content"
Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.085225 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8954ce1-4dee-4849-b0a5-26461590a6a0" containerName="extract-content"
Jan 27 10:09:52 crc kubenswrapper[4869]: E0127 10:09:52.085243 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f42f44-03d4-435c-a230-78a0252fd732" containerName="registry-server"
podUID="62f42f44-03d4-435c-a230-78a0252fd732" containerName="registry-server" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.085248 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f42f44-03d4-435c-a230-78a0252fd732" containerName="registry-server" Jan 27 10:09:52 crc kubenswrapper[4869]: E0127 10:09:52.085259 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f42f44-03d4-435c-a230-78a0252fd732" containerName="extract-content" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.085265 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f42f44-03d4-435c-a230-78a0252fd732" containerName="extract-content" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.085400 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8954ce1-4dee-4849-b0a5-26461590a6a0" containerName="registry-server" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.085414 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f30fbb69-9cbc-4a99-b25e-1cf09396382f" containerName="ovn-config" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.085429 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f42f44-03d4-435c-a230-78a0252fd732" containerName="registry-server" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.085907 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.094085 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.097126 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qf659-config-q8z2f"] Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.136909 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1674fa94-4e99-432d-b701-2d7f94fb323d-additional-scripts\") pod \"ovn-controller-qf659-config-q8z2f\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.136979 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mn6b\" (UniqueName: \"kubernetes.io/projected/1674fa94-4e99-432d-b701-2d7f94fb323d-kube-api-access-5mn6b\") pod \"ovn-controller-qf659-config-q8z2f\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.137001 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1674fa94-4e99-432d-b701-2d7f94fb323d-scripts\") pod \"ovn-controller-qf659-config-q8z2f\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.137026 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1674fa94-4e99-432d-b701-2d7f94fb323d-var-log-ovn\") pod \"ovn-controller-qf659-config-q8z2f\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.137079 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1674fa94-4e99-432d-b701-2d7f94fb323d-var-run-ovn\") pod \"ovn-controller-qf659-config-q8z2f\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.137101 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1674fa94-4e99-432d-b701-2d7f94fb323d-var-run\") pod \"ovn-controller-qf659-config-q8z2f\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.237899 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1674fa94-4e99-432d-b701-2d7f94fb323d-additional-scripts\") pod \"ovn-controller-qf659-config-q8z2f\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.238183 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mn6b\" (UniqueName: \"kubernetes.io/projected/1674fa94-4e99-432d-b701-2d7f94fb323d-kube-api-access-5mn6b\") pod \"ovn-controller-qf659-config-q8z2f\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.238208 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1674fa94-4e99-432d-b701-2d7f94fb323d-scripts\") pod \"ovn-controller-qf659-config-q8z2f\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.238229 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1674fa94-4e99-432d-b701-2d7f94fb323d-var-log-ovn\") pod \"ovn-controller-qf659-config-q8z2f\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.238281 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1674fa94-4e99-432d-b701-2d7f94fb323d-var-run-ovn\") pod \"ovn-controller-qf659-config-q8z2f\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.238302 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1674fa94-4e99-432d-b701-2d7f94fb323d-var-run\") pod \"ovn-controller-qf659-config-q8z2f\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.238506 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1674fa94-4e99-432d-b701-2d7f94fb323d-var-run\") pod \"ovn-controller-qf659-config-q8z2f\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 
10:09:52.238807 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1674fa94-4e99-432d-b701-2d7f94fb323d-additional-scripts\") pod \"ovn-controller-qf659-config-q8z2f\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.238895 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1674fa94-4e99-432d-b701-2d7f94fb323d-var-log-ovn\") pod \"ovn-controller-qf659-config-q8z2f\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.238945 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1674fa94-4e99-432d-b701-2d7f94fb323d-var-run-ovn\") pod \"ovn-controller-qf659-config-q8z2f\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.240434 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1674fa94-4e99-432d-b701-2d7f94fb323d-scripts\") pod \"ovn-controller-qf659-config-q8z2f\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.254729 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mn6b\" (UniqueName: \"kubernetes.io/projected/1674fa94-4e99-432d-b701-2d7f94fb323d-kube-api-access-5mn6b\") pod \"ovn-controller-qf659-config-q8z2f\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.312031 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xv9d9" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.421912 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.440645 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b6f6e7c-5c2a-46d0-87df-730e291ea02b-utilities\") pod \"1b6f6e7c-5c2a-46d0-87df-730e291ea02b\" (UID: \"1b6f6e7c-5c2a-46d0-87df-730e291ea02b\") " Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.440741 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8sk9\" (UniqueName: \"kubernetes.io/projected/1b6f6e7c-5c2a-46d0-87df-730e291ea02b-kube-api-access-r8sk9\") pod \"1b6f6e7c-5c2a-46d0-87df-730e291ea02b\" (UID: \"1b6f6e7c-5c2a-46d0-87df-730e291ea02b\") " Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.440770 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b6f6e7c-5c2a-46d0-87df-730e291ea02b-catalog-content\") pod \"1b6f6e7c-5c2a-46d0-87df-730e291ea02b\" (UID: \"1b6f6e7c-5c2a-46d0-87df-730e291ea02b\") " Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.441513 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b6f6e7c-5c2a-46d0-87df-730e291ea02b-utilities" (OuterVolumeSpecName: "utilities") pod "1b6f6e7c-5c2a-46d0-87df-730e291ea02b" (UID: "1b6f6e7c-5c2a-46d0-87df-730e291ea02b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.445531 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b6f6e7c-5c2a-46d0-87df-730e291ea02b-kube-api-access-r8sk9" (OuterVolumeSpecName: "kube-api-access-r8sk9") pod "1b6f6e7c-5c2a-46d0-87df-730e291ea02b" (UID: "1b6f6e7c-5c2a-46d0-87df-730e291ea02b"). InnerVolumeSpecName "kube-api-access-r8sk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.462606 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b6f6e7c-5c2a-46d0-87df-730e291ea02b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b6f6e7c-5c2a-46d0-87df-730e291ea02b" (UID: "1b6f6e7c-5c2a-46d0-87df-730e291ea02b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.502918 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xv9d9" event={"ID":"1b6f6e7c-5c2a-46d0-87df-730e291ea02b","Type":"ContainerDied","Data":"00072f8de3145ed8c00598b1f7863910a3c868e3606e9a14e4cd536d7d949fbd"} Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.503842 4869 scope.go:117] "RemoveContainer" containerID="bb7dcf0c26b70aefe4d8a95f00e11a2323d077abb163c18a8bd01c83ecc26417" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.502947 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xv9d9" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.538993 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xv9d9"] Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.542649 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b6f6e7c-5c2a-46d0-87df-730e291ea02b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.542675 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b6f6e7c-5c2a-46d0-87df-730e291ea02b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.542685 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8sk9\" (UniqueName: \"kubernetes.io/projected/1b6f6e7c-5c2a-46d0-87df-730e291ea02b-kube-api-access-r8sk9\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.556294 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xv9d9"] Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.761255 4869 scope.go:117] "RemoveContainer" containerID="a01a46c9e0036d745b515f75508560862e3cb19a00c2f6581addb048c50b6141" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.800071 4869 scope.go:117] "RemoveContainer" containerID="a8a588be6f3ec65e93d2ed11cde42e7b0aebeace7adf851dcb386c89657ccf53" Jan 27 10:09:52 crc kubenswrapper[4869]: I0127 10:09:52.890570 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qf659-config-q8z2f"] Jan 27 10:09:52 crc kubenswrapper[4869]: W0127 10:09:52.913567 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1674fa94_4e99_432d_b701_2d7f94fb323d.slice/crio-a9b20a5235508e9f466ba28ed4daa50c41127e4525c6ad9285c8aee84911c457 WatchSource:0}: Error finding container a9b20a5235508e9f466ba28ed4daa50c41127e4525c6ad9285c8aee84911c457: Status 404 returned error can't find the container with id a9b20a5235508e9f466ba28ed4daa50c41127e4525c6ad9285c8aee84911c457 Jan 27 10:09:53 crc kubenswrapper[4869]: I0127 10:09:53.033822 4869 scope.go:117] "RemoveContainer" containerID="713cbe5ddc293222c05cc4d2e1f4a343d1d98adff94eae20f18043cdb0dd6332" Jan 27 10:09:53 crc kubenswrapper[4869]: I0127 10:09:53.034226 4869 scope.go:117] "RemoveContainer" containerID="a63333c886f71c4589bfb134abdcbf86ceea0ce23a1b3cc0ee0a818c84c74df8" Jan 27 10:09:53 crc kubenswrapper[4869]: I0127 10:09:53.511031 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerStarted","Data":"2c871af1399c1a18f5848ed5794243ac637b38c0d4335182714036f73dd2458c"} Jan 27 10:09:53 crc kubenswrapper[4869]: I0127 10:09:53.512343 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 10:09:53 crc kubenswrapper[4869]: I0127 10:09:53.512479 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-qf659" Jan 27 10:09:53 crc kubenswrapper[4869]: I0127 10:09:53.515866 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"0429a74c-af6a-45f1-9ca2-b66dcd47ca38","Type":"ContainerStarted","Data":"965eebb7c0d02c09ca0027d6cd44de7d9ccd0ee9a14f48fb73523fb263777235"} Jan 27 10:09:53 crc kubenswrapper[4869]: I0127 10:09:53.515909 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0429a74c-af6a-45f1-9ca2-b66dcd47ca38","Type":"ContainerStarted","Data":"e5e710d2d1f92516ddacd0b09dce4a3ab91cdb1beaa721401d42175cb89fee5f"} Jan 27 10:09:53 crc kubenswrapper[4869]: I0127 10:09:53.515919 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0429a74c-af6a-45f1-9ca2-b66dcd47ca38","Type":"ContainerStarted","Data":"16347742f8c567bc60ffbfc7e91b0eabf4ea6e6fbfca8395cdfc9c6952fb5538"} Jan 27 10:09:53 crc kubenswrapper[4869]: I0127 10:09:53.517675 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerStarted","Data":"1f1aa0bf19a32c776bf2f83a31ed4d652e4843944a42736153fcccba8078a367"} Jan 27 10:09:53 crc kubenswrapper[4869]: I0127 10:09:53.517862 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 10:09:53 crc kubenswrapper[4869]: I0127 10:09:53.523351 4869 generic.go:334] "Generic (PLEG): container finished" podID="1674fa94-4e99-432d-b701-2d7f94fb323d" containerID="59fe7b1d2f8e7896a7a069dc92029c6baf0a2add719dab0bc745bfd8b386e066" exitCode=0 Jan 27 10:09:53 crc kubenswrapper[4869]: I0127 10:09:53.523395 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qf659-config-q8z2f" event={"ID":"1674fa94-4e99-432d-b701-2d7f94fb323d","Type":"ContainerDied","Data":"59fe7b1d2f8e7896a7a069dc92029c6baf0a2add719dab0bc745bfd8b386e066"} Jan 27 10:09:53 crc kubenswrapper[4869]: I0127 10:09:53.523419 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qf659-config-q8z2f" event={"ID":"1674fa94-4e99-432d-b701-2d7f94fb323d","Type":"ContainerStarted","Data":"a9b20a5235508e9f466ba28ed4daa50c41127e4525c6ad9285c8aee84911c457"} Jan 27 10:09:54 crc kubenswrapper[4869]: I0127 10:09:54.041552 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b6f6e7c-5c2a-46d0-87df-730e291ea02b" path="/var/lib/kubelet/pods/1b6f6e7c-5c2a-46d0-87df-730e291ea02b/volumes" Jan 27 10:09:54 crc kubenswrapper[4869]: I0127 10:09:54.542219 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0429a74c-af6a-45f1-9ca2-b66dcd47ca38","Type":"ContainerStarted","Data":"dc779cc883e30f3ac4d96306fc4247ea9b2f646a15128414a6743e94a72ac701"} Jan 27 10:09:54 crc kubenswrapper[4869]: I0127 10:09:54.897199 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:54 crc kubenswrapper[4869]: I0127 10:09:54.972245 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1674fa94-4e99-432d-b701-2d7f94fb323d-var-run-ovn\") pod \"1674fa94-4e99-432d-b701-2d7f94fb323d\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " Jan 27 10:09:54 crc kubenswrapper[4869]: I0127 10:09:54.972323 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1674fa94-4e99-432d-b701-2d7f94fb323d-var-log-ovn\") pod \"1674fa94-4e99-432d-b701-2d7f94fb323d\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " Jan 27 10:09:54 crc kubenswrapper[4869]: I0127 10:09:54.972395 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mn6b\" (UniqueName: \"kubernetes.io/projected/1674fa94-4e99-432d-b701-2d7f94fb323d-kube-api-access-5mn6b\") pod \"1674fa94-4e99-432d-b701-2d7f94fb323d\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " Jan 27 10:09:54 crc kubenswrapper[4869]: I0127 10:09:54.972428 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1674fa94-4e99-432d-b701-2d7f94fb323d-scripts\") pod \"1674fa94-4e99-432d-b701-2d7f94fb323d\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " Jan 27 10:09:54 crc kubenswrapper[4869]: I0127 10:09:54.972450 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1674fa94-4e99-432d-b701-2d7f94fb323d-additional-scripts\") pod \"1674fa94-4e99-432d-b701-2d7f94fb323d\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " Jan 27 10:09:54 crc kubenswrapper[4869]: I0127 10:09:54.972474 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1674fa94-4e99-432d-b701-2d7f94fb323d-var-run\") pod \"1674fa94-4e99-432d-b701-2d7f94fb323d\" (UID: \"1674fa94-4e99-432d-b701-2d7f94fb323d\") " Jan 27 10:09:54 crc kubenswrapper[4869]: I0127 10:09:54.972758 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1674fa94-4e99-432d-b701-2d7f94fb323d-var-run" (OuterVolumeSpecName: "var-run") pod "1674fa94-4e99-432d-b701-2d7f94fb323d" (UID: "1674fa94-4e99-432d-b701-2d7f94fb323d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:09:54 crc kubenswrapper[4869]: I0127 10:09:54.972783 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1674fa94-4e99-432d-b701-2d7f94fb323d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "1674fa94-4e99-432d-b701-2d7f94fb323d" (UID: "1674fa94-4e99-432d-b701-2d7f94fb323d"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:09:54 crc kubenswrapper[4869]: I0127 10:09:54.972797 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1674fa94-4e99-432d-b701-2d7f94fb323d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "1674fa94-4e99-432d-b701-2d7f94fb323d" (UID: "1674fa94-4e99-432d-b701-2d7f94fb323d"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:09:54 crc kubenswrapper[4869]: I0127 10:09:54.973728 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1674fa94-4e99-432d-b701-2d7f94fb323d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "1674fa94-4e99-432d-b701-2d7f94fb323d" (UID: "1674fa94-4e99-432d-b701-2d7f94fb323d"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:09:54 crc kubenswrapper[4869]: I0127 10:09:54.974151 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1674fa94-4e99-432d-b701-2d7f94fb323d-scripts" (OuterVolumeSpecName: "scripts") pod "1674fa94-4e99-432d-b701-2d7f94fb323d" (UID: "1674fa94-4e99-432d-b701-2d7f94fb323d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:09:54 crc kubenswrapper[4869]: I0127 10:09:54.975893 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1674fa94-4e99-432d-b701-2d7f94fb323d-kube-api-access-5mn6b" (OuterVolumeSpecName: "kube-api-access-5mn6b") pod "1674fa94-4e99-432d-b701-2d7f94fb323d" (UID: "1674fa94-4e99-432d-b701-2d7f94fb323d"). InnerVolumeSpecName "kube-api-access-5mn6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:09:55 crc kubenswrapper[4869]: I0127 10:09:55.074264 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1674fa94-4e99-432d-b701-2d7f94fb323d-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:55 crc kubenswrapper[4869]: I0127 10:09:55.074292 4869 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1674fa94-4e99-432d-b701-2d7f94fb323d-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:55 crc kubenswrapper[4869]: I0127 10:09:55.074302 4869 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1674fa94-4e99-432d-b701-2d7f94fb323d-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:55 crc kubenswrapper[4869]: I0127 10:09:55.074313 4869 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1674fa94-4e99-432d-b701-2d7f94fb323d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:55 crc kubenswrapper[4869]: I0127 10:09:55.074322 4869 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1674fa94-4e99-432d-b701-2d7f94fb323d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:55 crc kubenswrapper[4869]: I0127 10:09:55.074331 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mn6b\" (UniqueName: \"kubernetes.io/projected/1674fa94-4e99-432d-b701-2d7f94fb323d-kube-api-access-5mn6b\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:55 crc kubenswrapper[4869]: I0127 10:09:55.554045 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0429a74c-af6a-45f1-9ca2-b66dcd47ca38","Type":"ContainerStarted","Data":"f192d729d17e45cea3828edcb58264946035d6cf2c86d56571b5eab021b7db84"} Jan 27 10:09:55 crc kubenswrapper[4869]: I0127 10:09:55.554340 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"0429a74c-af6a-45f1-9ca2-b66dcd47ca38","Type":"ContainerStarted","Data":"a68847cfb695559716fd253f110b1730488ef162d04d805a05b77f7946c719ad"} Jan 27 10:09:55 crc kubenswrapper[4869]: I0127 10:09:55.554350 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0429a74c-af6a-45f1-9ca2-b66dcd47ca38","Type":"ContainerStarted","Data":"cb1796308d41cfb53f27a930a592f4c2757394162be5716ae648ca5388881efa"} Jan 27 10:09:55 crc kubenswrapper[4869]: I0127 10:09:55.554358 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0429a74c-af6a-45f1-9ca2-b66dcd47ca38","Type":"ContainerStarted","Data":"a1a5302313c097063d2888b7435bc2d4c77691148198307296ff3dc6aa188bcb"} Jan 27 10:09:55 crc kubenswrapper[4869]: I0127 10:09:55.554368 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0429a74c-af6a-45f1-9ca2-b66dcd47ca38","Type":"ContainerStarted","Data":"427e04890767133149b6da5e8bf1a971f8ba3e0300d112376a1b1c7569ab722c"} Jan 27 10:09:55 crc kubenswrapper[4869]: I0127 10:09:55.556410 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qf659-config-q8z2f" event={"ID":"1674fa94-4e99-432d-b701-2d7f94fb323d","Type":"ContainerDied","Data":"a9b20a5235508e9f466ba28ed4daa50c41127e4525c6ad9285c8aee84911c457"} Jan 27 10:09:55 crc kubenswrapper[4869]: I0127 10:09:55.556445 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9b20a5235508e9f466ba28ed4daa50c41127e4525c6ad9285c8aee84911c457" Jan 27 10:09:55 crc kubenswrapper[4869]: I0127 10:09:55.556459 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qf659-config-q8z2f" Jan 27 10:09:55 crc kubenswrapper[4869]: I0127 10:09:55.976648 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qf659-config-q8z2f"] Jan 27 10:09:55 crc kubenswrapper[4869]: I0127 10:09:55.984269 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qf659-config-q8z2f"] Jan 27 10:09:56 crc kubenswrapper[4869]: I0127 10:09:56.045035 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1674fa94-4e99-432d-b701-2d7f94fb323d" path="/var/lib/kubelet/pods/1674fa94-4e99-432d-b701-2d7f94fb323d/volumes" Jan 27 10:09:56 crc kubenswrapper[4869]: I0127 10:09:56.581349 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0429a74c-af6a-45f1-9ca2-b66dcd47ca38","Type":"ContainerStarted","Data":"5d26ef32bbc66f7f3285840537254f71d7194c76aff61d0b411936cadb0b0a22"} Jan 27 10:09:56 crc kubenswrapper[4869]: I0127 10:09:56.581811 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0429a74c-af6a-45f1-9ca2-b66dcd47ca38","Type":"ContainerStarted","Data":"0204b82f1cf7283d80c1ea36c1e9257e7cc5d8480760ba85437cef01ebf44303"} Jan 27 10:09:56 crc kubenswrapper[4869]: I0127 10:09:56.585924 4869 generic.go:334] "Generic (PLEG): container finished" podID="97d81ee5-695a-463d-8e02-30d6abcc13c3" containerID="6bf82af922b85d626cea63b3634be750c808560cba053f73ffeec66c8e6f02dd" exitCode=0 Jan 27 10:09:56 crc kubenswrapper[4869]: I0127 10:09:56.585977 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-b9v6v" event={"ID":"97d81ee5-695a-463d-8e02-30d6abcc13c3","Type":"ContainerDied","Data":"6bf82af922b85d626cea63b3634be750c808560cba053f73ffeec66c8e6f02dd"} Jan 27 
10:09:56 crc kubenswrapper[4869]: I0127 10:09:56.638668 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=35.724242655 podStartE2EDuration="41.638644163s" podCreationTimestamp="2026-01-27 10:09:15 +0000 UTC" firstStartedPulling="2026-01-27 10:09:48.770077365 +0000 UTC m=+957.390501458" lastFinishedPulling="2026-01-27 10:09:54.684478883 +0000 UTC m=+963.304902966" observedRunningTime="2026-01-27 10:09:56.630055309 +0000 UTC m=+965.250479422" watchObservedRunningTime="2026-01-27 10:09:56.638644163 +0000 UTC m=+965.259068256" Jan 27 10:09:56 crc kubenswrapper[4869]: I0127 10:09:56.956564 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-t5rtl"] Jan 27 10:09:56 crc kubenswrapper[4869]: E0127 10:09:56.957362 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1674fa94-4e99-432d-b701-2d7f94fb323d" containerName="ovn-config" Jan 27 10:09:56 crc kubenswrapper[4869]: I0127 10:09:56.957429 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1674fa94-4e99-432d-b701-2d7f94fb323d" containerName="ovn-config" Jan 27 10:09:56 crc kubenswrapper[4869]: E0127 10:09:56.957498 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b6f6e7c-5c2a-46d0-87df-730e291ea02b" containerName="extract-utilities" Jan 27 10:09:56 crc kubenswrapper[4869]: I0127 10:09:56.957551 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b6f6e7c-5c2a-46d0-87df-730e291ea02b" containerName="extract-utilities" Jan 27 10:09:56 crc kubenswrapper[4869]: E0127 10:09:56.957630 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b6f6e7c-5c2a-46d0-87df-730e291ea02b" containerName="extract-content" Jan 27 10:09:56 crc kubenswrapper[4869]: I0127 10:09:56.957688 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b6f6e7c-5c2a-46d0-87df-730e291ea02b" containerName="extract-content" Jan 27 10:09:56 crc kubenswrapper[4869]: E0127 10:09:56.957743 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b6f6e7c-5c2a-46d0-87df-730e291ea02b" containerName="registry-server" Jan 27 10:09:56 crc kubenswrapper[4869]: I0127 10:09:56.957797 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b6f6e7c-5c2a-46d0-87df-730e291ea02b" containerName="registry-server" Jan 27 10:09:56 crc kubenswrapper[4869]: I0127 10:09:56.958001 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b6f6e7c-5c2a-46d0-87df-730e291ea02b" containerName="registry-server" Jan 27 10:09:56 crc kubenswrapper[4869]: I0127 10:09:56.958087 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1674fa94-4e99-432d-b701-2d7f94fb323d" containerName="ovn-config" Jan 27 10:09:56 crc kubenswrapper[4869]: I0127 10:09:56.958888 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:09:56 crc kubenswrapper[4869]: I0127 10:09:56.961426 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 27 10:09:56 crc kubenswrapper[4869]: I0127 10:09:56.976643 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-t5rtl"] Jan 27 10:09:57 crc kubenswrapper[4869]: I0127 10:09:57.105969 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-t5rtl\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:09:57 crc kubenswrapper[4869]: I0127 10:09:57.106058 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-t5rtl\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:09:57 crc kubenswrapper[4869]: I0127 10:09:57.106159 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-t5rtl\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:09:57 crc kubenswrapper[4869]: I0127 10:09:57.106222 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-config\") pod \"dnsmasq-dns-764c5664d7-t5rtl\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:09:57 crc kubenswrapper[4869]: I0127 10:09:57.106297 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnxvd\" (UniqueName: \"kubernetes.io/projected/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-kube-api-access-nnxvd\") pod \"dnsmasq-dns-764c5664d7-t5rtl\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:09:57 crc kubenswrapper[4869]: I0127 10:09:57.106488 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-dns-svc\") pod \"dnsmasq-dns-764c5664d7-t5rtl\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:09:57 crc kubenswrapper[4869]: I0127 10:09:57.208768 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-dns-svc\") pod \"dnsmasq-dns-764c5664d7-t5rtl\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:09:57 crc kubenswrapper[4869]: I0127 10:09:57.208863 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-t5rtl\" (UID: 
\"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:09:57 crc kubenswrapper[4869]: I0127 10:09:57.208886 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-t5rtl\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:09:57 crc kubenswrapper[4869]: I0127 10:09:57.208923 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-t5rtl\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:09:57 crc kubenswrapper[4869]: I0127 10:09:57.208951 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-config\") pod \"dnsmasq-dns-764c5664d7-t5rtl\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:09:57 crc kubenswrapper[4869]: I0127 10:09:57.208967 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnxvd\" (UniqueName: \"kubernetes.io/projected/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-kube-api-access-nnxvd\") pod \"dnsmasq-dns-764c5664d7-t5rtl\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:09:57 crc kubenswrapper[4869]: I0127 10:09:57.210107 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-config\") pod \"dnsmasq-dns-764c5664d7-t5rtl\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:09:57 crc kubenswrapper[4869]: I0127 10:09:57.210103 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-t5rtl\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:09:57 crc kubenswrapper[4869]: I0127 10:09:57.210139 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-t5rtl\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:09:57 crc kubenswrapper[4869]: I0127 10:09:57.210443 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-dns-svc\") pod \"dnsmasq-dns-764c5664d7-t5rtl\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:09:57 crc kubenswrapper[4869]: I0127 10:09:57.210682 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-t5rtl\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:09:57 
crc kubenswrapper[4869]: I0127 10:09:57.253823 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnxvd\" (UniqueName: \"kubernetes.io/projected/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-kube-api-access-nnxvd\") pod \"dnsmasq-dns-764c5664d7-t5rtl\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:09:57 crc kubenswrapper[4869]: I0127 10:09:57.277603 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:09:57 crc kubenswrapper[4869]: I0127 10:09:57.718452 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-t5rtl"] Jan 27 10:09:57 crc kubenswrapper[4869]: W0127 10:09:57.723708 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fe1cb7a_8b84_46b2_8c33_94c8808d3ff6.slice/crio-a75665aef7d6b738e32dc02c90d58a965e5a49f712425949eff4f35352b63aed WatchSource:0}: Error finding container a75665aef7d6b738e32dc02c90d58a965e5a49f712425949eff4f35352b63aed: Status 404 returned error can't find the container with id a75665aef7d6b738e32dc02c90d58a965e5a49f712425949eff4f35352b63aed Jan 27 10:09:57 crc kubenswrapper[4869]: I0127 10:09:57.984347 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-b9v6v" Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.146188 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d81ee5-695a-463d-8e02-30d6abcc13c3-config-data\") pod \"97d81ee5-695a-463d-8e02-30d6abcc13c3\" (UID: \"97d81ee5-695a-463d-8e02-30d6abcc13c3\") " Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.146247 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d81ee5-695a-463d-8e02-30d6abcc13c3-combined-ca-bundle\") pod \"97d81ee5-695a-463d-8e02-30d6abcc13c3\" (UID: \"97d81ee5-695a-463d-8e02-30d6abcc13c3\") " Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.146427 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq8dj\" (UniqueName: \"kubernetes.io/projected/97d81ee5-695a-463d-8e02-30d6abcc13c3-kube-api-access-zq8dj\") pod \"97d81ee5-695a-463d-8e02-30d6abcc13c3\" (UID: \"97d81ee5-695a-463d-8e02-30d6abcc13c3\") " Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.146465 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/97d81ee5-695a-463d-8e02-30d6abcc13c3-db-sync-config-data\") pod \"97d81ee5-695a-463d-8e02-30d6abcc13c3\" (UID: \"97d81ee5-695a-463d-8e02-30d6abcc13c3\") " Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.151214 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97d81ee5-695a-463d-8e02-30d6abcc13c3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "97d81ee5-695a-463d-8e02-30d6abcc13c3" (UID: "97d81ee5-695a-463d-8e02-30d6abcc13c3"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.151284 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97d81ee5-695a-463d-8e02-30d6abcc13c3-kube-api-access-zq8dj" (OuterVolumeSpecName: "kube-api-access-zq8dj") pod "97d81ee5-695a-463d-8e02-30d6abcc13c3" (UID: "97d81ee5-695a-463d-8e02-30d6abcc13c3"). InnerVolumeSpecName "kube-api-access-zq8dj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.167821 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97d81ee5-695a-463d-8e02-30d6abcc13c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97d81ee5-695a-463d-8e02-30d6abcc13c3" (UID: "97d81ee5-695a-463d-8e02-30d6abcc13c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.203192 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97d81ee5-695a-463d-8e02-30d6abcc13c3-config-data" (OuterVolumeSpecName: "config-data") pod "97d81ee5-695a-463d-8e02-30d6abcc13c3" (UID: "97d81ee5-695a-463d-8e02-30d6abcc13c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.247802 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq8dj\" (UniqueName: \"kubernetes.io/projected/97d81ee5-695a-463d-8e02-30d6abcc13c3-kube-api-access-zq8dj\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.247850 4869 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/97d81ee5-695a-463d-8e02-30d6abcc13c3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.247861 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97d81ee5-695a-463d-8e02-30d6abcc13c3-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.247869 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97d81ee5-695a-463d-8e02-30d6abcc13c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.601305 4869 generic.go:334] "Generic (PLEG): container finished" podID="5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6" containerID="f720cc83a70c82ea1a840e2afcf22697be62438d80c6272905bcd56f30f5ab88" exitCode=0 Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.601368 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" event={"ID":"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6","Type":"ContainerDied","Data":"f720cc83a70c82ea1a840e2afcf22697be62438d80c6272905bcd56f30f5ab88"} Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.601394 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" event={"ID":"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6","Type":"ContainerStarted","Data":"a75665aef7d6b738e32dc02c90d58a965e5a49f712425949eff4f35352b63aed"} Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.605012 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-b9v6v" Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.605201 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-b9v6v" event={"ID":"97d81ee5-695a-463d-8e02-30d6abcc13c3","Type":"ContainerDied","Data":"530c09bb9c1fc773306af705649acca3d4aa82fae7d23dac98ad62a6a279bd2f"} Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.605246 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="530c09bb9c1fc773306af705649acca3d4aa82fae7d23dac98ad62a6a279bd2f" Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.609796 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" containerID="1f1aa0bf19a32c776bf2f83a31ed4d652e4843944a42736153fcccba8078a367" exitCode=0 Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.609945 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerDied","Data":"1f1aa0bf19a32c776bf2f83a31ed4d652e4843944a42736153fcccba8078a367"} Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.609980 4869 scope.go:117] "RemoveContainer" containerID="713cbe5ddc293222c05cc4d2e1f4a343d1d98adff94eae20f18043cdb0dd6332" Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.610559 4869 scope.go:117] "RemoveContainer" containerID="1f1aa0bf19a32c776bf2f83a31ed4d652e4843944a42736153fcccba8078a367" Jan 27 10:09:58 crc kubenswrapper[4869]: E0127 10:09:58.610748 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 20s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.615864 4869 generic.go:334] "Generic (PLEG): container finished" podID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" containerID="2c871af1399c1a18f5848ed5794243ac637b38c0d4335182714036f73dd2458c" exitCode=0 Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.615898 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerDied","Data":"2c871af1399c1a18f5848ed5794243ac637b38c0d4335182714036f73dd2458c"} Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.628226 4869 scope.go:117] "RemoveContainer" containerID="2c871af1399c1a18f5848ed5794243ac637b38c0d4335182714036f73dd2458c" Jan 27 10:09:58 crc kubenswrapper[4869]: E0127 10:09:58.630326 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 20s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:09:58 crc kubenswrapper[4869]: I0127 10:09:58.731372 4869 scope.go:117] "RemoveContainer" containerID="a63333c886f71c4589bfb134abdcbf86ceea0ce23a1b3cc0ee0a818c84c74df8" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.048908 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-t5rtl"] Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.106442 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-lvhqh"] 
Jan 27 10:09:59 crc kubenswrapper[4869]: E0127 10:09:59.106807 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d81ee5-695a-463d-8e02-30d6abcc13c3" containerName="glance-db-sync" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.106843 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d81ee5-695a-463d-8e02-30d6abcc13c3" containerName="glance-db-sync" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.106997 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="97d81ee5-695a-463d-8e02-30d6abcc13c3" containerName="glance-db-sync" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.107790 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.123809 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-lvhqh"] Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.167175 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ade660e6-68ee-4d24-a454-26bbb5f89008-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-lvhqh\" (UID: \"ade660e6-68ee-4d24-a454-26bbb5f89008\") " pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.167223 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ade660e6-68ee-4d24-a454-26bbb5f89008-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-lvhqh\" (UID: \"ade660e6-68ee-4d24-a454-26bbb5f89008\") " pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.167508 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ade660e6-68ee-4d24-a454-26bbb5f89008-config\") pod \"dnsmasq-dns-74f6bcbc87-lvhqh\" (UID: \"ade660e6-68ee-4d24-a454-26bbb5f89008\") " pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.167606 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ade660e6-68ee-4d24-a454-26bbb5f89008-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-lvhqh\" (UID: \"ade660e6-68ee-4d24-a454-26bbb5f89008\") " pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.167665 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btshh\" (UniqueName: \"kubernetes.io/projected/ade660e6-68ee-4d24-a454-26bbb5f89008-kube-api-access-btshh\") pod \"dnsmasq-dns-74f6bcbc87-lvhqh\" (UID: \"ade660e6-68ee-4d24-a454-26bbb5f89008\") " pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.167877 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ade660e6-68ee-4d24-a454-26bbb5f89008-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-lvhqh\" (UID: \"ade660e6-68ee-4d24-a454-26bbb5f89008\") " pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.268804 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ade660e6-68ee-4d24-a454-26bbb5f89008-config\") pod \"dnsmasq-dns-74f6bcbc87-lvhqh\" (UID: \"ade660e6-68ee-4d24-a454-26bbb5f89008\") " pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.268868 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ade660e6-68ee-4d24-a454-26bbb5f89008-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-lvhqh\" (UID: \"ade660e6-68ee-4d24-a454-26bbb5f89008\") " pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.268896 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btshh\" (UniqueName: \"kubernetes.io/projected/ade660e6-68ee-4d24-a454-26bbb5f89008-kube-api-access-btshh\") pod \"dnsmasq-dns-74f6bcbc87-lvhqh\" (UID: \"ade660e6-68ee-4d24-a454-26bbb5f89008\") " pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.268941 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ade660e6-68ee-4d24-a454-26bbb5f89008-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-lvhqh\" (UID: \"ade660e6-68ee-4d24-a454-26bbb5f89008\") " pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.268994 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ade660e6-68ee-4d24-a454-26bbb5f89008-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-lvhqh\" (UID: \"ade660e6-68ee-4d24-a454-26bbb5f89008\") " pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.269012 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ade660e6-68ee-4d24-a454-26bbb5f89008-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-lvhqh\" (UID: \"ade660e6-68ee-4d24-a454-26bbb5f89008\") " pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.269802 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ade660e6-68ee-4d24-a454-26bbb5f89008-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-lvhqh\" (UID: \"ade660e6-68ee-4d24-a454-26bbb5f89008\") " pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.270312 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ade660e6-68ee-4d24-a454-26bbb5f89008-config\") pod \"dnsmasq-dns-74f6bcbc87-lvhqh\" (UID: \"ade660e6-68ee-4d24-a454-26bbb5f89008\") " pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.270792 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ade660e6-68ee-4d24-a454-26bbb5f89008-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-lvhqh\" (UID: \"ade660e6-68ee-4d24-a454-26bbb5f89008\") " pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.271625 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ade660e6-68ee-4d24-a454-26bbb5f89008-ovsdbserver-sb\") pod 
\"dnsmasq-dns-74f6bcbc87-lvhqh\" (UID: \"ade660e6-68ee-4d24-a454-26bbb5f89008\") " pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.272120 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ade660e6-68ee-4d24-a454-26bbb5f89008-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-lvhqh\" (UID: \"ade660e6-68ee-4d24-a454-26bbb5f89008\") " pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.288986 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btshh\" (UniqueName: \"kubernetes.io/projected/ade660e6-68ee-4d24-a454-26bbb5f89008-kube-api-access-btshh\") pod \"dnsmasq-dns-74f6bcbc87-lvhqh\" (UID: \"ade660e6-68ee-4d24-a454-26bbb5f89008\") " pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.421265 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.627088 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" event={"ID":"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6","Type":"ContainerStarted","Data":"b6202550f1b635e043d5a947a5ccf7cd46a710ae95b5964cc29c492c2c17fd2c"} Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.627439 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.647954 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" podStartSLOduration=3.64793923 podStartE2EDuration="3.64793923s" podCreationTimestamp="2026-01-27 10:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 10:09:59.645678183 +0000 UTC m=+968.266102266" watchObservedRunningTime="2026-01-27 10:09:59.64793923 +0000 UTC m=+968.268363313" Jan 27 10:09:59 crc kubenswrapper[4869]: I0127 10:09:59.825239 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-lvhqh"] Jan 27 10:09:59 crc kubenswrapper[4869]: W0127 10:09:59.832558 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podade660e6_68ee_4d24_a454_26bbb5f89008.slice/crio-9cca5eb2c45968d63ba6dc1b8e0a42e9d306e8a6d389b1c62f451d83f4944def WatchSource:0}: Error finding container 9cca5eb2c45968d63ba6dc1b8e0a42e9d306e8a6d389b1c62f451d83f4944def: Status 404 returned error can't find the container with id 9cca5eb2c45968d63ba6dc1b8e0a42e9d306e8a6d389b1c62f451d83f4944def Jan 27 10:10:00 crc kubenswrapper[4869]: I0127 10:10:00.636711 4869 generic.go:334] "Generic (PLEG): container finished" podID="ade660e6-68ee-4d24-a454-26bbb5f89008" containerID="c6c175546ae92098549118e4931548e68a216cc63fd78523f9626dcfb26f7039" exitCode=0 Jan 27 10:10:00 crc kubenswrapper[4869]: I0127 10:10:00.636814 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" event={"ID":"ade660e6-68ee-4d24-a454-26bbb5f89008","Type":"ContainerDied","Data":"c6c175546ae92098549118e4931548e68a216cc63fd78523f9626dcfb26f7039"} Jan 27 10:10:00 crc kubenswrapper[4869]: I0127 10:10:00.637201 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" event={"ID":"ade660e6-68ee-4d24-a454-26bbb5f89008","Type":"ContainerStarted","Data":"9cca5eb2c45968d63ba6dc1b8e0a42e9d306e8a6d389b1c62f451d83f4944def"} Jan 27 10:10:00 crc kubenswrapper[4869]: I0127 10:10:00.637468 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" podUID="5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6" containerName="dnsmasq-dns" containerID="cri-o://b6202550f1b635e043d5a947a5ccf7cd46a710ae95b5964cc29c492c2c17fd2c" gracePeriod=10 Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.085702 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.222483 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-dns-svc\") pod \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.222654 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnxvd\" (UniqueName: \"kubernetes.io/projected/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-kube-api-access-nnxvd\") pod \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.222737 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-ovsdbserver-sb\") pod \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.222791 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-ovsdbserver-nb\") pod \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.222883 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-dns-swift-storage-0\") pod \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.222927 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-config\") pod \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\" (UID: \"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6\") " Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.233018 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-kube-api-access-nnxvd" (OuterVolumeSpecName: "kube-api-access-nnxvd") pod "5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6" (UID: "5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6"). InnerVolumeSpecName "kube-api-access-nnxvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.268507 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6" (UID: "5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.288159 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6" (UID: "5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.299160 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6" (UID: "5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.306263 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-config" (OuterVolumeSpecName: "config") pod "5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6" (UID: "5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.310609 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6" (UID: "5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.325553 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.325590 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.325601 4869 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.325613 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-config\") on node \"crc\" DevicePath \"\"" Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.325625 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.325636 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnxvd\" (UniqueName: \"kubernetes.io/projected/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6-kube-api-access-nnxvd\") on node \"crc\" DevicePath \"\"" Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.647656 4869 generic.go:334] "Generic (PLEG): container finished" podID="5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6" containerID="b6202550f1b635e043d5a947a5ccf7cd46a710ae95b5964cc29c492c2c17fd2c" exitCode=0 Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.647691 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.647747 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" event={"ID":"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6","Type":"ContainerDied","Data":"b6202550f1b635e043d5a947a5ccf7cd46a710ae95b5964cc29c492c2c17fd2c"} Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.647790 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-t5rtl" event={"ID":"5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6","Type":"ContainerDied","Data":"a75665aef7d6b738e32dc02c90d58a965e5a49f712425949eff4f35352b63aed"} Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.647810 4869 scope.go:117] "RemoveContainer" containerID="b6202550f1b635e043d5a947a5ccf7cd46a710ae95b5964cc29c492c2c17fd2c" Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.650472 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" event={"ID":"ade660e6-68ee-4d24-a454-26bbb5f89008","Type":"ContainerStarted","Data":"a91fab99f1d92adae438485a9c2423014762930b248f3b7609c449d8f9799d3c"} Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.650694 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.682492 4869 scope.go:117] "RemoveContainer" containerID="f720cc83a70c82ea1a840e2afcf22697be62438d80c6272905bcd56f30f5ab88" Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.694959 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" podStartSLOduration=2.69493844 podStartE2EDuration="2.69493844s" podCreationTimestamp="2026-01-27 10:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 10:10:01.676929246 +0000 UTC m=+970.297353329" watchObservedRunningTime="2026-01-27 10:10:01.69493844 +0000 UTC m=+970.315362523" Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.705476 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-t5rtl"] Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.706929 4869 scope.go:117] "RemoveContainer" containerID="b6202550f1b635e043d5a947a5ccf7cd46a710ae95b5964cc29c492c2c17fd2c" Jan 27 10:10:01 crc kubenswrapper[4869]: E0127 10:10:01.707574 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6202550f1b635e043d5a947a5ccf7cd46a710ae95b5964cc29c492c2c17fd2c\": container with ID starting with b6202550f1b635e043d5a947a5ccf7cd46a710ae95b5964cc29c492c2c17fd2c not found: ID does not exist" containerID="b6202550f1b635e043d5a947a5ccf7cd46a710ae95b5964cc29c492c2c17fd2c" Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.707626 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6202550f1b635e043d5a947a5ccf7cd46a710ae95b5964cc29c492c2c17fd2c"} err="failed to get container status \"b6202550f1b635e043d5a947a5ccf7cd46a710ae95b5964cc29c492c2c17fd2c\": rpc error: code = NotFound desc = could not find container \"b6202550f1b635e043d5a947a5ccf7cd46a710ae95b5964cc29c492c2c17fd2c\": container with ID starting with b6202550f1b635e043d5a947a5ccf7cd46a710ae95b5964cc29c492c2c17fd2c not found: ID does not exist" Jan 27 10:10:01 crc kubenswrapper[4869]: 
I0127 10:10:01.707661 4869 scope.go:117] "RemoveContainer" containerID="f720cc83a70c82ea1a840e2afcf22697be62438d80c6272905bcd56f30f5ab88" Jan 27 10:10:01 crc kubenswrapper[4869]: E0127 10:10:01.708110 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f720cc83a70c82ea1a840e2afcf22697be62438d80c6272905bcd56f30f5ab88\": container with ID starting with f720cc83a70c82ea1a840e2afcf22697be62438d80c6272905bcd56f30f5ab88 not found: ID does not exist" containerID="f720cc83a70c82ea1a840e2afcf22697be62438d80c6272905bcd56f30f5ab88" Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.708148 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f720cc83a70c82ea1a840e2afcf22697be62438d80c6272905bcd56f30f5ab88"} err="failed to get container status \"f720cc83a70c82ea1a840e2afcf22697be62438d80c6272905bcd56f30f5ab88\": rpc error: code = NotFound desc = could not find container \"f720cc83a70c82ea1a840e2afcf22697be62438d80c6272905bcd56f30f5ab88\": container with ID starting with f720cc83a70c82ea1a840e2afcf22697be62438d80c6272905bcd56f30f5ab88 not found: ID does not exist" Jan 27 10:10:01 crc kubenswrapper[4869]: I0127 10:10:01.711548 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-t5rtl"] Jan 27 10:10:02 crc kubenswrapper[4869]: I0127 10:10:02.043736 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6" path="/var/lib/kubelet/pods/5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6/volumes" Jan 27 10:10:09 crc kubenswrapper[4869]: I0127 10:10:09.422939 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-lvhqh" Jan 27 10:10:09 crc kubenswrapper[4869]: I0127 10:10:09.501383 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-4z8cs"] Jan 27 10:10:09 crc kubenswrapper[4869]: I0127 10:10:09.501756 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-4z8cs" podUID="b6cd084b-c383-4201-972c-227cedc088a4" containerName="dnsmasq-dns" containerID="cri-o://9e5b64ae8a8eeb0182776baef2bdc04823c0fa0b338e226f243f10a42b151827" gracePeriod=10 Jan 27 10:10:09 crc kubenswrapper[4869]: I0127 10:10:09.735707 4869 generic.go:334] "Generic (PLEG): container finished" podID="b6cd084b-c383-4201-972c-227cedc088a4" containerID="9e5b64ae8a8eeb0182776baef2bdc04823c0fa0b338e226f243f10a42b151827" exitCode=0 Jan 27 10:10:09 crc kubenswrapper[4869]: I0127 10:10:09.735747 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-4z8cs" event={"ID":"b6cd084b-c383-4201-972c-227cedc088a4","Type":"ContainerDied","Data":"9e5b64ae8a8eeb0182776baef2bdc04823c0fa0b338e226f243f10a42b151827"} Jan 27 10:10:09 crc kubenswrapper[4869]: I0127 10:10:09.979937 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-4z8cs" Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.033511 4869 scope.go:117] "RemoveContainer" containerID="2c871af1399c1a18f5848ed5794243ac637b38c0d4335182714036f73dd2458c" Jan 27 10:10:10 crc kubenswrapper[4869]: E0127 10:10:10.033882 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 20s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.082282 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-ovsdbserver-nb\") pod \"b6cd084b-c383-4201-972c-227cedc088a4\" (UID: \"b6cd084b-c383-4201-972c-227cedc088a4\") " Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.082349 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-dns-svc\") pod \"b6cd084b-c383-4201-972c-227cedc088a4\" (UID: \"b6cd084b-c383-4201-972c-227cedc088a4\") " Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.082389 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcg5m\" (UniqueName: \"kubernetes.io/projected/b6cd084b-c383-4201-972c-227cedc088a4-kube-api-access-xcg5m\") pod \"b6cd084b-c383-4201-972c-227cedc088a4\" (UID: \"b6cd084b-c383-4201-972c-227cedc088a4\") " Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.082440 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-ovsdbserver-sb\") pod \"b6cd084b-c383-4201-972c-227cedc088a4\" (UID: \"b6cd084b-c383-4201-972c-227cedc088a4\") " Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.082523 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-config\") pod \"b6cd084b-c383-4201-972c-227cedc088a4\" (UID: \"b6cd084b-c383-4201-972c-227cedc088a4\") " Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.096619 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd084b-c383-4201-972c-227cedc088a4-kube-api-access-xcg5m" (OuterVolumeSpecName: "kube-api-access-xcg5m") pod "b6cd084b-c383-4201-972c-227cedc088a4" (UID: "b6cd084b-c383-4201-972c-227cedc088a4"). InnerVolumeSpecName "kube-api-access-xcg5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.119710 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b6cd084b-c383-4201-972c-227cedc088a4" (UID: "b6cd084b-c383-4201-972c-227cedc088a4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.119722 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-config" (OuterVolumeSpecName: "config") pod "b6cd084b-c383-4201-972c-227cedc088a4" (UID: "b6cd084b-c383-4201-972c-227cedc088a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.120869 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b6cd084b-c383-4201-972c-227cedc088a4" (UID: "b6cd084b-c383-4201-972c-227cedc088a4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.123592 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b6cd084b-c383-4201-972c-227cedc088a4" (UID: "b6cd084b-c383-4201-972c-227cedc088a4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.186623 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-config\") on node \"crc\" DevicePath \"\"" Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.186994 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.187121 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.187142 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcg5m\" (UniqueName: \"kubernetes.io/projected/b6cd084b-c383-4201-972c-227cedc088a4-kube-api-access-xcg5m\") on node \"crc\" DevicePath \"\"" Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.187153 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6cd084b-c383-4201-972c-227cedc088a4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.747466 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-4z8cs" event={"ID":"b6cd084b-c383-4201-972c-227cedc088a4","Type":"ContainerDied","Data":"c93573e658a0fef38c215bc59a0f2c30cff15aba9a7874971d0ed29508741497"} Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.747551 4869 scope.go:117] "RemoveContainer" containerID="9e5b64ae8a8eeb0182776baef2bdc04823c0fa0b338e226f243f10a42b151827" Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.747765 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-4z8cs" Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.785119 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-4z8cs"] Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.789492 4869 scope.go:117] "RemoveContainer" containerID="9bc93b923bcee84df645c6a0144d72da0afd1f937cbcff7c569af3135b86dba9" Jan 27 10:10:10 crc kubenswrapper[4869]: I0127 10:10:10.790969 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-4z8cs"] Jan 27 10:10:12 crc kubenswrapper[4869]: I0127 10:10:12.036998 4869 scope.go:117] "RemoveContainer" containerID="1f1aa0bf19a32c776bf2f83a31ed4d652e4843944a42736153fcccba8078a367" Jan 27 10:10:12 crc kubenswrapper[4869]: E0127 10:10:12.037440 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 20s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:10:12 crc kubenswrapper[4869]: I0127 10:10:12.050304 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd084b-c383-4201-972c-227cedc088a4" path="/var/lib/kubelet/pods/b6cd084b-c383-4201-972c-227cedc088a4/volumes" Jan 27 10:10:24 crc kubenswrapper[4869]: I0127 10:10:24.034366 4869 scope.go:117] "RemoveContainer" containerID="2c871af1399c1a18f5848ed5794243ac637b38c0d4335182714036f73dd2458c" Jan 27 10:10:24 crc kubenswrapper[4869]: I0127 10:10:24.872735 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerStarted","Data":"fbe3fc63596c7cd0e2da252501842ec3601b01d472c82b1a750907a5f5c7f0e6"} Jan 27 10:10:24 crc kubenswrapper[4869]: I0127 10:10:24.873256 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 10:10:27 crc kubenswrapper[4869]: I0127 10:10:27.033384 4869 scope.go:117] "RemoveContainer" containerID="1f1aa0bf19a32c776bf2f83a31ed4d652e4843944a42736153fcccba8078a367" Jan 27 10:10:27 crc kubenswrapper[4869]: I0127 10:10:27.901053 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerStarted","Data":"a5952901e0468f191ffdbbecb044c0ceeed69ccfcea81b33d9758f3371854846"} Jan 27 10:10:27 crc kubenswrapper[4869]: I0127 10:10:27.901712 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 10:10:28 crc kubenswrapper[4869]: I0127 10:10:28.915077 4869 generic.go:334] "Generic (PLEG): container finished" podID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" containerID="fbe3fc63596c7cd0e2da252501842ec3601b01d472c82b1a750907a5f5c7f0e6" exitCode=0 Jan 27 10:10:28 crc kubenswrapper[4869]: I0127 10:10:28.915139 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerDied","Data":"fbe3fc63596c7cd0e2da252501842ec3601b01d472c82b1a750907a5f5c7f0e6"} Jan 27 10:10:28 crc kubenswrapper[4869]: I0127 10:10:28.917205 4869 scope.go:117] "RemoveContainer" containerID="2c871af1399c1a18f5848ed5794243ac637b38c0d4335182714036f73dd2458c" Jan 27 10:10:28 crc kubenswrapper[4869]: I0127 10:10:28.918022 4869 scope.go:117] 
"RemoveContainer" containerID="fbe3fc63596c7cd0e2da252501842ec3601b01d472c82b1a750907a5f5c7f0e6" Jan 27 10:10:28 crc kubenswrapper[4869]: E0127 10:10:28.918272 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 40s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:10:31 crc kubenswrapper[4869]: I0127 10:10:31.951343 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" containerID="a5952901e0468f191ffdbbecb044c0ceeed69ccfcea81b33d9758f3371854846" exitCode=0 Jan 27 10:10:31 crc kubenswrapper[4869]: I0127 10:10:31.951454 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerDied","Data":"a5952901e0468f191ffdbbecb044c0ceeed69ccfcea81b33d9758f3371854846"} Jan 27 10:10:31 crc kubenswrapper[4869]: I0127 10:10:31.951733 4869 scope.go:117] "RemoveContainer" containerID="1f1aa0bf19a32c776bf2f83a31ed4d652e4843944a42736153fcccba8078a367" Jan 27 10:10:31 crc kubenswrapper[4869]: I0127 10:10:31.952517 4869 scope.go:117] "RemoveContainer" containerID="a5952901e0468f191ffdbbecb044c0ceeed69ccfcea81b33d9758f3371854846" Jan 27 10:10:31 crc kubenswrapper[4869]: E0127 10:10:31.956378 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 40s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:10:41 crc kubenswrapper[4869]: I0127 10:10:41.033513 4869 scope.go:117] "RemoveContainer" containerID="fbe3fc63596c7cd0e2da252501842ec3601b01d472c82b1a750907a5f5c7f0e6" Jan 27 10:10:41 crc kubenswrapper[4869]: E0127 10:10:41.034248 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 40s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:10:46 crc kubenswrapper[4869]: I0127 10:10:46.032982 4869 scope.go:117] "RemoveContainer" containerID="a5952901e0468f191ffdbbecb044c0ceeed69ccfcea81b33d9758f3371854846" Jan 27 10:10:46 crc kubenswrapper[4869]: E0127 10:10:46.034279 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 40s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:10:55 crc kubenswrapper[4869]: I0127 10:10:55.032746 4869 scope.go:117] "RemoveContainer" containerID="fbe3fc63596c7cd0e2da252501842ec3601b01d472c82b1a750907a5f5c7f0e6" Jan 27 10:10:55 crc kubenswrapper[4869]: E0127 10:10:55.034202 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 40s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" 
podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:11:00 crc kubenswrapper[4869]: I0127 10:11:00.033713 4869 scope.go:117] "RemoveContainer" containerID="a5952901e0468f191ffdbbecb044c0ceeed69ccfcea81b33d9758f3371854846" Jan 27 10:11:00 crc kubenswrapper[4869]: E0127 10:11:00.035274 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 40s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:11:10 crc kubenswrapper[4869]: I0127 10:11:10.034040 4869 scope.go:117] "RemoveContainer" containerID="fbe3fc63596c7cd0e2da252501842ec3601b01d472c82b1a750907a5f5c7f0e6" Jan 27 10:11:11 crc kubenswrapper[4869]: I0127 10:11:11.337354 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerStarted","Data":"b06088ce5aab87dcbb61f6e3149cc572830dc1ba482f90d029a9c441e803a4f0"} Jan 27 10:11:11 crc kubenswrapper[4869]: I0127 10:11:11.338122 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 10:11:12 crc kubenswrapper[4869]: I0127 10:11:12.045001 4869 scope.go:117] "RemoveContainer" containerID="a5952901e0468f191ffdbbecb044c0ceeed69ccfcea81b33d9758f3371854846" Jan 27 10:11:12 crc kubenswrapper[4869]: I0127 10:11:12.349622 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerStarted","Data":"230a8092969eda3296bb95c8e08f044e7341bed5857d960b73708df66084b0b9"} Jan 27 10:11:13 crc kubenswrapper[4869]: I0127 10:11:13.356101 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 10:11:15 crc kubenswrapper[4869]: I0127 10:11:15.374624 4869 generic.go:334] "Generic (PLEG): container finished" podID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" containerID="b06088ce5aab87dcbb61f6e3149cc572830dc1ba482f90d029a9c441e803a4f0" exitCode=0 Jan 27 10:11:15 crc kubenswrapper[4869]: I0127 10:11:15.374665 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerDied","Data":"b06088ce5aab87dcbb61f6e3149cc572830dc1ba482f90d029a9c441e803a4f0"} Jan 27 10:11:15 crc kubenswrapper[4869]: I0127 10:11:15.374957 4869 scope.go:117] "RemoveContainer" containerID="fbe3fc63596c7cd0e2da252501842ec3601b01d472c82b1a750907a5f5c7f0e6" Jan 27 10:11:15 crc kubenswrapper[4869]: I0127 10:11:15.375553 4869 scope.go:117] "RemoveContainer" containerID="b06088ce5aab87dcbb61f6e3149cc572830dc1ba482f90d029a9c441e803a4f0" Jan 27 10:11:15 crc kubenswrapper[4869]: E0127 10:11:15.375759 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:11:16 crc kubenswrapper[4869]: I0127 10:11:16.385728 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" containerID="230a8092969eda3296bb95c8e08f044e7341bed5857d960b73708df66084b0b9" exitCode=0 Jan 27 10:11:16 crc 
kubenswrapper[4869]: I0127 10:11:16.385785 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerDied","Data":"230a8092969eda3296bb95c8e08f044e7341bed5857d960b73708df66084b0b9"} Jan 27 10:11:16 crc kubenswrapper[4869]: I0127 10:11:16.385845 4869 scope.go:117] "RemoveContainer" containerID="a5952901e0468f191ffdbbecb044c0ceeed69ccfcea81b33d9758f3371854846" Jan 27 10:11:16 crc kubenswrapper[4869]: I0127 10:11:16.386656 4869 scope.go:117] "RemoveContainer" containerID="230a8092969eda3296bb95c8e08f044e7341bed5857d960b73708df66084b0b9" Jan 27 10:11:16 crc kubenswrapper[4869]: E0127 10:11:16.386986 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:11:26 crc kubenswrapper[4869]: I0127 10:11:26.036725 4869 scope.go:117] "RemoveContainer" containerID="b06088ce5aab87dcbb61f6e3149cc572830dc1ba482f90d029a9c441e803a4f0" Jan 27 10:11:26 crc kubenswrapper[4869]: E0127 10:11:26.037537 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:11:31 crc kubenswrapper[4869]: I0127 10:11:31.032978 4869 scope.go:117] "RemoveContainer" containerID="230a8092969eda3296bb95c8e08f044e7341bed5857d960b73708df66084b0b9" Jan 27 10:11:31 crc kubenswrapper[4869]: E0127 10:11:31.033876 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:11:37 crc kubenswrapper[4869]: I0127 10:11:37.033240 4869 scope.go:117] "RemoveContainer" containerID="b06088ce5aab87dcbb61f6e3149cc572830dc1ba482f90d029a9c441e803a4f0" Jan 27 10:11:37 crc kubenswrapper[4869]: E0127 10:11:37.034525 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:11:43 crc kubenswrapper[4869]: I0127 10:11:43.033914 4869 scope.go:117] "RemoveContainer" containerID="230a8092969eda3296bb95c8e08f044e7341bed5857d960b73708df66084b0b9" Jan 27 10:11:43 crc kubenswrapper[4869]: E0127 10:11:43.034447 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:11:48 crc kubenswrapper[4869]: I0127 10:11:48.033344 4869 scope.go:117] "RemoveContainer" 
containerID="b06088ce5aab87dcbb61f6e3149cc572830dc1ba482f90d029a9c441e803a4f0" Jan 27 10:11:48 crc kubenswrapper[4869]: E0127 10:11:48.034207 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:11:58 crc kubenswrapper[4869]: I0127 10:11:58.033413 4869 scope.go:117] "RemoveContainer" containerID="230a8092969eda3296bb95c8e08f044e7341bed5857d960b73708df66084b0b9" Jan 27 10:11:58 crc kubenswrapper[4869]: E0127 10:11:58.034327 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:12:02 crc kubenswrapper[4869]: I0127 10:12:02.037543 4869 scope.go:117] "RemoveContainer" containerID="b06088ce5aab87dcbb61f6e3149cc572830dc1ba482f90d029a9c441e803a4f0" Jan 27 10:12:02 crc kubenswrapper[4869]: E0127 10:12:02.038207 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:12:12 crc kubenswrapper[4869]: I0127 10:12:12.044015 4869 scope.go:117] "RemoveContainer" containerID="230a8092969eda3296bb95c8e08f044e7341bed5857d960b73708df66084b0b9" Jan 27 10:12:12 crc kubenswrapper[4869]: E0127 10:12:12.046750 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:12:14 crc kubenswrapper[4869]: I0127 10:12:14.033343 4869 scope.go:117] "RemoveContainer" containerID="b06088ce5aab87dcbb61f6e3149cc572830dc1ba482f90d029a9c441e803a4f0" Jan 27 10:12:14 crc kubenswrapper[4869]: E0127 10:12:14.033785 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:12:15 crc kubenswrapper[4869]: I0127 10:12:15.698259 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:12:15 crc kubenswrapper[4869]: I0127 10:12:15.698964 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 27 10:12:27 crc kubenswrapper[4869]: I0127 10:12:27.032927 4869 scope.go:117] "RemoveContainer" containerID="230a8092969eda3296bb95c8e08f044e7341bed5857d960b73708df66084b0b9" Jan 27 10:12:27 crc kubenswrapper[4869]: E0127 10:12:27.033932 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:12:28 crc kubenswrapper[4869]: I0127 10:12:28.033124 4869 scope.go:117] "RemoveContainer" containerID="b06088ce5aab87dcbb61f6e3149cc572830dc1ba482f90d029a9c441e803a4f0" Jan 27 10:12:28 crc kubenswrapper[4869]: E0127 10:12:28.033962 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:12:38 crc kubenswrapper[4869]: I0127 10:12:38.033091 4869 scope.go:117] "RemoveContainer" containerID="230a8092969eda3296bb95c8e08f044e7341bed5857d960b73708df66084b0b9" Jan 27 10:12:39 crc kubenswrapper[4869]: I0127 10:12:39.163264 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerStarted","Data":"739ececcb3ebc4add504fdf3bbe09cae62e0e840ebcdc8d9ffc99794ba9ce324"} Jan 27 10:12:39 crc kubenswrapper[4869]: I0127 10:12:39.164213 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 10:12:40 crc kubenswrapper[4869]: I0127 10:12:40.033304 4869 scope.go:117] "RemoveContainer" containerID="b06088ce5aab87dcbb61f6e3149cc572830dc1ba482f90d029a9c441e803a4f0" Jan 27 10:12:41 crc kubenswrapper[4869]: I0127 10:12:41.179927 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerStarted","Data":"8c7f03e95d2a9276b4ed1321026af83e9508632dd20e4f60a8e76254a09ed5c0"} Jan 27 10:12:41 crc kubenswrapper[4869]: I0127 10:12:41.180127 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 10:12:43 crc kubenswrapper[4869]: I0127 10:12:43.204068 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" containerID="739ececcb3ebc4add504fdf3bbe09cae62e0e840ebcdc8d9ffc99794ba9ce324" exitCode=0 Jan 27 10:12:43 crc kubenswrapper[4869]: I0127 10:12:43.204173 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerDied","Data":"739ececcb3ebc4add504fdf3bbe09cae62e0e840ebcdc8d9ffc99794ba9ce324"} Jan 27 10:12:43 crc kubenswrapper[4869]: I0127 10:12:43.204514 4869 scope.go:117] "RemoveContainer" containerID="230a8092969eda3296bb95c8e08f044e7341bed5857d960b73708df66084b0b9" Jan 27 10:12:43 crc kubenswrapper[4869]: I0127 10:12:43.205529 4869 scope.go:117] "RemoveContainer" containerID="739ececcb3ebc4add504fdf3bbe09cae62e0e840ebcdc8d9ffc99794ba9ce324" Jan 27 10:12:43 crc kubenswrapper[4869]: E0127 10:12:43.206095 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:12:45 crc kubenswrapper[4869]: I0127 10:12:45.225187 4869 generic.go:334] "Generic (PLEG): container finished" podID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" containerID="8c7f03e95d2a9276b4ed1321026af83e9508632dd20e4f60a8e76254a09ed5c0" exitCode=0 Jan 27 10:12:45 crc kubenswrapper[4869]: I0127 10:12:45.225259 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerDied","Data":"8c7f03e95d2a9276b4ed1321026af83e9508632dd20e4f60a8e76254a09ed5c0"} Jan 27 10:12:45 crc kubenswrapper[4869]: I0127 10:12:45.225333 4869 scope.go:117] "RemoveContainer" containerID="b06088ce5aab87dcbb61f6e3149cc572830dc1ba482f90d029a9c441e803a4f0" Jan 27 10:12:45 crc kubenswrapper[4869]: I0127 10:12:45.226236 4869 scope.go:117] "RemoveContainer" containerID="8c7f03e95d2a9276b4ed1321026af83e9508632dd20e4f60a8e76254a09ed5c0" Jan 27 10:12:45 crc kubenswrapper[4869]: E0127 10:12:45.226636 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:12:45 crc kubenswrapper[4869]: I0127 10:12:45.697864 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:12:45 crc kubenswrapper[4869]: I0127 10:12:45.698177 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:12:54 crc kubenswrapper[4869]: I0127 10:12:54.033899 4869 scope.go:117] "RemoveContainer" containerID="739ececcb3ebc4add504fdf3bbe09cae62e0e840ebcdc8d9ffc99794ba9ce324" Jan 27 10:12:54 crc kubenswrapper[4869]: E0127 10:12:54.034799 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:13:01 crc kubenswrapper[4869]: I0127 10:13:01.033728 4869 scope.go:117] "RemoveContainer" containerID="8c7f03e95d2a9276b4ed1321026af83e9508632dd20e4f60a8e76254a09ed5c0" Jan 27 10:13:01 crc kubenswrapper[4869]: E0127 10:13:01.034702 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:13:08 crc kubenswrapper[4869]: 
I0127 10:13:08.033093 4869 scope.go:117] "RemoveContainer" containerID="739ececcb3ebc4add504fdf3bbe09cae62e0e840ebcdc8d9ffc99794ba9ce324" Jan 27 10:13:08 crc kubenswrapper[4869]: E0127 10:13:08.033709 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:13:13 crc kubenswrapper[4869]: I0127 10:13:13.032911 4869 scope.go:117] "RemoveContainer" containerID="8c7f03e95d2a9276b4ed1321026af83e9508632dd20e4f60a8e76254a09ed5c0" Jan 27 10:13:13 crc kubenswrapper[4869]: E0127 10:13:13.033766 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:13:15 crc kubenswrapper[4869]: I0127 10:13:15.697945 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:13:15 crc kubenswrapper[4869]: I0127 10:13:15.698267 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:13:15 crc kubenswrapper[4869]: I0127 10:13:15.698313 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 10:13:15 crc kubenswrapper[4869]: I0127 10:13:15.698998 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b72347200950347d222694240cf88dda5067f82f3f49e7890c07c595718e823"} pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 10:13:15 crc kubenswrapper[4869]: I0127 10:13:15.699042 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" containerID="cri-o://1b72347200950347d222694240cf88dda5067f82f3f49e7890c07c595718e823" gracePeriod=600 Jan 27 10:13:16 crc kubenswrapper[4869]: I0127 10:13:16.485508 4869 generic.go:334] "Generic (PLEG): container finished" podID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerID="1b72347200950347d222694240cf88dda5067f82f3f49e7890c07c595718e823" exitCode=0 Jan 27 10:13:16 crc kubenswrapper[4869]: I0127 10:13:16.485599 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerDied","Data":"1b72347200950347d222694240cf88dda5067f82f3f49e7890c07c595718e823"} Jan 27 10:13:16 crc kubenswrapper[4869]: I0127 
10:13:16.486149 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerStarted","Data":"e42a767d7d8267d9715f57b3658f53bd93ed0ffa29874755bfedd19bddd1072d"} Jan 27 10:13:16 crc kubenswrapper[4869]: I0127 10:13:16.486171 4869 scope.go:117] "RemoveContainer" containerID="dc8a6d1fdbc6b3f8427a05417ce1783a27aac64b6b76b4051c7a781e964cbb0b" Jan 27 10:13:21 crc kubenswrapper[4869]: I0127 10:13:21.033253 4869 scope.go:117] "RemoveContainer" containerID="739ececcb3ebc4add504fdf3bbe09cae62e0e840ebcdc8d9ffc99794ba9ce324" Jan 27 10:13:21 crc kubenswrapper[4869]: E0127 10:13:21.034012 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:13:25 crc kubenswrapper[4869]: I0127 10:13:25.034112 4869 scope.go:117] "RemoveContainer" containerID="8c7f03e95d2a9276b4ed1321026af83e9508632dd20e4f60a8e76254a09ed5c0" Jan 27 10:13:25 crc kubenswrapper[4869]: E0127 10:13:25.035571 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:13:33 crc kubenswrapper[4869]: I0127 10:13:33.033869 4869 scope.go:117] "RemoveContainer" containerID="739ececcb3ebc4add504fdf3bbe09cae62e0e840ebcdc8d9ffc99794ba9ce324" Jan 27 10:13:33 crc kubenswrapper[4869]: E0127 10:13:33.034829 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:13:38 crc kubenswrapper[4869]: I0127 10:13:38.035246 4869 scope.go:117] "RemoveContainer" containerID="8c7f03e95d2a9276b4ed1321026af83e9508632dd20e4f60a8e76254a09ed5c0" Jan 27 10:13:38 crc kubenswrapper[4869]: E0127 10:13:38.036016 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:13:45 crc kubenswrapper[4869]: I0127 10:13:45.033579 4869 scope.go:117] "RemoveContainer" containerID="739ececcb3ebc4add504fdf3bbe09cae62e0e840ebcdc8d9ffc99794ba9ce324" Jan 27 10:13:45 crc kubenswrapper[4869]: E0127 10:13:45.034310 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:13:52 crc kubenswrapper[4869]: I0127 10:13:52.037670 4869 scope.go:117] "RemoveContainer" 
containerID="8c7f03e95d2a9276b4ed1321026af83e9508632dd20e4f60a8e76254a09ed5c0" Jan 27 10:13:52 crc kubenswrapper[4869]: E0127 10:13:52.038428 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:14:00 crc kubenswrapper[4869]: I0127 10:14:00.034680 4869 scope.go:117] "RemoveContainer" containerID="739ececcb3ebc4add504fdf3bbe09cae62e0e840ebcdc8d9ffc99794ba9ce324" Jan 27 10:14:00 crc kubenswrapper[4869]: E0127 10:14:00.036801 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:14:07 crc kubenswrapper[4869]: I0127 10:14:07.032708 4869 scope.go:117] "RemoveContainer" containerID="8c7f03e95d2a9276b4ed1321026af83e9508632dd20e4f60a8e76254a09ed5c0" Jan 27 10:14:07 crc kubenswrapper[4869]: E0127 10:14:07.033422 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:14:12 crc kubenswrapper[4869]: I0127 10:14:12.039425 4869 scope.go:117] "RemoveContainer" containerID="739ececcb3ebc4add504fdf3bbe09cae62e0e840ebcdc8d9ffc99794ba9ce324" Jan 27 10:14:12 crc kubenswrapper[4869]: E0127 10:14:12.040605 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:14:19 crc kubenswrapper[4869]: I0127 10:14:19.034902 4869 scope.go:117] "RemoveContainer" containerID="8c7f03e95d2a9276b4ed1321026af83e9508632dd20e4f60a8e76254a09ed5c0" Jan 27 10:14:19 crc kubenswrapper[4869]: E0127 10:14:19.035651 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:14:24 crc kubenswrapper[4869]: I0127 10:14:24.034381 4869 scope.go:117] "RemoveContainer" containerID="739ececcb3ebc4add504fdf3bbe09cae62e0e840ebcdc8d9ffc99794ba9ce324" Jan 27 10:14:24 crc kubenswrapper[4869]: E0127 10:14:24.035515 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:14:31 crc kubenswrapper[4869]: I0127 10:14:31.033049 4869 scope.go:117] "RemoveContainer" 
containerID="8c7f03e95d2a9276b4ed1321026af83e9508632dd20e4f60a8e76254a09ed5c0" Jan 27 10:14:31 crc kubenswrapper[4869]: E0127 10:14:31.033794 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:14:37 crc kubenswrapper[4869]: I0127 10:14:37.033305 4869 scope.go:117] "RemoveContainer" containerID="739ececcb3ebc4add504fdf3bbe09cae62e0e840ebcdc8d9ffc99794ba9ce324" Jan 27 10:14:37 crc kubenswrapper[4869]: E0127 10:14:37.034033 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:14:42 crc kubenswrapper[4869]: I0127 10:14:42.037237 4869 scope.go:117] "RemoveContainer" containerID="8c7f03e95d2a9276b4ed1321026af83e9508632dd20e4f60a8e76254a09ed5c0" Jan 27 10:14:42 crc kubenswrapper[4869]: E0127 10:14:42.037992 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:14:48 crc kubenswrapper[4869]: I0127 10:14:48.033488 4869 scope.go:117] "RemoveContainer" containerID="739ececcb3ebc4add504fdf3bbe09cae62e0e840ebcdc8d9ffc99794ba9ce324" Jan 27 10:14:48 crc kubenswrapper[4869]: E0127 10:14:48.034252 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:14:56 crc kubenswrapper[4869]: I0127 10:14:56.032813 4869 scope.go:117] "RemoveContainer" containerID="8c7f03e95d2a9276b4ed1321026af83e9508632dd20e4f60a8e76254a09ed5c0" Jan 27 10:14:56 crc kubenswrapper[4869]: E0127 10:14:56.033902 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.161100 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491815-26dvf"] Jan 27 10:15:00 crc kubenswrapper[4869]: E0127 10:15:00.161721 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6cd084b-c383-4201-972c-227cedc088a4" containerName="init" Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.161736 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6cd084b-c383-4201-972c-227cedc088a4" containerName="init" Jan 27 10:15:00 crc kubenswrapper[4869]: E0127 10:15:00.161753 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b6cd084b-c383-4201-972c-227cedc088a4" containerName="dnsmasq-dns" Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.161759 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6cd084b-c383-4201-972c-227cedc088a4" containerName="dnsmasq-dns" Jan 27 10:15:00 crc kubenswrapper[4869]: E0127 10:15:00.161772 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6" containerName="init" Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.161777 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6" containerName="init" Jan 27 10:15:00 crc kubenswrapper[4869]: E0127 10:15:00.161799 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6" containerName="dnsmasq-dns" Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.161806 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6" containerName="dnsmasq-dns" Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.161996 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6cd084b-c383-4201-972c-227cedc088a4" containerName="dnsmasq-dns" Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.162015 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fe1cb7a-8b84-46b2-8c33-94c8808d3ff6" containerName="dnsmasq-dns" Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.162515 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-26dvf" Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.165119 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.166181 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.184378 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491815-26dvf"] Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.308557 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/315448d9-e8aa-4644-a3bb-77cc1145fd1e-config-volume\") pod \"collect-profiles-29491815-26dvf\" (UID: \"315448d9-e8aa-4644-a3bb-77cc1145fd1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-26dvf" Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.308617 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/315448d9-e8aa-4644-a3bb-77cc1145fd1e-secret-volume\") pod \"collect-profiles-29491815-26dvf\" (UID: \"315448d9-e8aa-4644-a3bb-77cc1145fd1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-26dvf" Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.308704 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f75r\" (UniqueName: \"kubernetes.io/projected/315448d9-e8aa-4644-a3bb-77cc1145fd1e-kube-api-access-9f75r\") pod \"collect-profiles-29491815-26dvf\" (UID: \"315448d9-e8aa-4644-a3bb-77cc1145fd1e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-26dvf" Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.410403 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f75r\" (UniqueName: \"kubernetes.io/projected/315448d9-e8aa-4644-a3bb-77cc1145fd1e-kube-api-access-9f75r\") pod \"collect-profiles-29491815-26dvf\" (UID: \"315448d9-e8aa-4644-a3bb-77cc1145fd1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-26dvf" Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.410529 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/315448d9-e8aa-4644-a3bb-77cc1145fd1e-config-volume\") pod \"collect-profiles-29491815-26dvf\" (UID: \"315448d9-e8aa-4644-a3bb-77cc1145fd1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-26dvf" Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.410563 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/315448d9-e8aa-4644-a3bb-77cc1145fd1e-secret-volume\") pod \"collect-profiles-29491815-26dvf\" (UID: \"315448d9-e8aa-4644-a3bb-77cc1145fd1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-26dvf" Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.411745 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/315448d9-e8aa-4644-a3bb-77cc1145fd1e-config-volume\") pod \"collect-profiles-29491815-26dvf\" (UID: \"315448d9-e8aa-4644-a3bb-77cc1145fd1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-26dvf" Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.427026 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/315448d9-e8aa-4644-a3bb-77cc1145fd1e-secret-volume\") pod \"collect-profiles-29491815-26dvf\" (UID: \"315448d9-e8aa-4644-a3bb-77cc1145fd1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-26dvf" Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.430805 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f75r\" (UniqueName: \"kubernetes.io/projected/315448d9-e8aa-4644-a3bb-77cc1145fd1e-kube-api-access-9f75r\") pod \"collect-profiles-29491815-26dvf\" (UID: \"315448d9-e8aa-4644-a3bb-77cc1145fd1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-26dvf" Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.484050 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-26dvf" Jan 27 10:15:00 crc kubenswrapper[4869]: I0127 10:15:00.925772 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491815-26dvf"] Jan 27 10:15:01 crc kubenswrapper[4869]: I0127 10:15:01.033572 4869 scope.go:117] "RemoveContainer" containerID="739ececcb3ebc4add504fdf3bbe09cae62e0e840ebcdc8d9ffc99794ba9ce324" Jan 27 10:15:01 crc kubenswrapper[4869]: E0127 10:15:01.033795 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:15:01 crc kubenswrapper[4869]: I0127 10:15:01.351668 4869 generic.go:334] "Generic (PLEG): container finished" podID="315448d9-e8aa-4644-a3bb-77cc1145fd1e" containerID="94a853e17d6d5dca7c9fccb788964acbb23207b24b8a52936d0141c7eb7fd6f3" exitCode=0 Jan 27 10:15:01 crc kubenswrapper[4869]: I0127 10:15:01.351704 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-26dvf" event={"ID":"315448d9-e8aa-4644-a3bb-77cc1145fd1e","Type":"ContainerDied","Data":"94a853e17d6d5dca7c9fccb788964acbb23207b24b8a52936d0141c7eb7fd6f3"} Jan 27 10:15:01 crc kubenswrapper[4869]: I0127 10:15:01.351727 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-26dvf" event={"ID":"315448d9-e8aa-4644-a3bb-77cc1145fd1e","Type":"ContainerStarted","Data":"b40c0d7c266f443ba37502b4de399f4c0126e462464406c75d3e0a51aa941bb1"} Jan 27 10:15:02 crc kubenswrapper[4869]: I0127 10:15:02.707892 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-26dvf" Jan 27 10:15:02 crc kubenswrapper[4869]: I0127 10:15:02.850266 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f75r\" (UniqueName: \"kubernetes.io/projected/315448d9-e8aa-4644-a3bb-77cc1145fd1e-kube-api-access-9f75r\") pod \"315448d9-e8aa-4644-a3bb-77cc1145fd1e\" (UID: \"315448d9-e8aa-4644-a3bb-77cc1145fd1e\") " Jan 27 10:15:02 crc kubenswrapper[4869]: I0127 10:15:02.850410 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/315448d9-e8aa-4644-a3bb-77cc1145fd1e-secret-volume\") pod \"315448d9-e8aa-4644-a3bb-77cc1145fd1e\" (UID: \"315448d9-e8aa-4644-a3bb-77cc1145fd1e\") " Jan 27 10:15:02 crc kubenswrapper[4869]: I0127 10:15:02.850645 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/315448d9-e8aa-4644-a3bb-77cc1145fd1e-config-volume\") pod \"315448d9-e8aa-4644-a3bb-77cc1145fd1e\" (UID: \"315448d9-e8aa-4644-a3bb-77cc1145fd1e\") " Jan 27 10:15:02 crc kubenswrapper[4869]: I0127 10:15:02.851708 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/315448d9-e8aa-4644-a3bb-77cc1145fd1e-config-volume" (OuterVolumeSpecName: "config-volume") pod "315448d9-e8aa-4644-a3bb-77cc1145fd1e" (UID: "315448d9-e8aa-4644-a3bb-77cc1145fd1e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:15:02 crc kubenswrapper[4869]: I0127 10:15:02.852391 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/315448d9-e8aa-4644-a3bb-77cc1145fd1e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 10:15:02 crc kubenswrapper[4869]: I0127 10:15:02.855978 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/315448d9-e8aa-4644-a3bb-77cc1145fd1e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "315448d9-e8aa-4644-a3bb-77cc1145fd1e" (UID: "315448d9-e8aa-4644-a3bb-77cc1145fd1e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 10:15:02 crc kubenswrapper[4869]: I0127 10:15:02.856032 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/315448d9-e8aa-4644-a3bb-77cc1145fd1e-kube-api-access-9f75r" (OuterVolumeSpecName: "kube-api-access-9f75r") pod "315448d9-e8aa-4644-a3bb-77cc1145fd1e" (UID: "315448d9-e8aa-4644-a3bb-77cc1145fd1e"). InnerVolumeSpecName "kube-api-access-9f75r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:15:02 crc kubenswrapper[4869]: I0127 10:15:02.954330 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/315448d9-e8aa-4644-a3bb-77cc1145fd1e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 10:15:02 crc kubenswrapper[4869]: I0127 10:15:02.954368 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f75r\" (UniqueName: \"kubernetes.io/projected/315448d9-e8aa-4644-a3bb-77cc1145fd1e-kube-api-access-9f75r\") on node \"crc\" DevicePath \"\"" Jan 27 10:15:03 crc kubenswrapper[4869]: I0127 10:15:03.372499 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-26dvf" event={"ID":"315448d9-e8aa-4644-a3bb-77cc1145fd1e","Type":"ContainerDied","Data":"b40c0d7c266f443ba37502b4de399f4c0126e462464406c75d3e0a51aa941bb1"} Jan 27 10:15:03 crc kubenswrapper[4869]: I0127 10:15:03.372548 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b40c0d7c266f443ba37502b4de399f4c0126e462464406c75d3e0a51aa941bb1" Jan 27 10:15:03 crc kubenswrapper[4869]: I0127 10:15:03.372568 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491815-26dvf" Jan 27 10:15:10 crc kubenswrapper[4869]: I0127 10:15:10.033944 4869 scope.go:117] "RemoveContainer" containerID="8c7f03e95d2a9276b4ed1321026af83e9508632dd20e4f60a8e76254a09ed5c0" Jan 27 10:15:10 crc kubenswrapper[4869]: E0127 10:15:10.035065 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:15:14 crc kubenswrapper[4869]: I0127 10:15:14.034230 4869 scope.go:117] "RemoveContainer" containerID="739ececcb3ebc4add504fdf3bbe09cae62e0e840ebcdc8d9ffc99794ba9ce324" Jan 27 10:15:14 crc kubenswrapper[4869]: E0127 10:15:14.034876 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:15:24 crc kubenswrapper[4869]: I0127 10:15:24.033863 4869 scope.go:117] "RemoveContainer" containerID="8c7f03e95d2a9276b4ed1321026af83e9508632dd20e4f60a8e76254a09ed5c0" Jan 27 10:15:24 crc kubenswrapper[4869]: E0127 10:15:24.034793 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:15:29 crc kubenswrapper[4869]: I0127 10:15:29.034280 4869 scope.go:117] "RemoveContainer" containerID="739ececcb3ebc4add504fdf3bbe09cae62e0e840ebcdc8d9ffc99794ba9ce324" Jan 27 10:15:29 crc kubenswrapper[4869]: I0127 10:15:29.588657 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerStarted","Data":"a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969"} Jan 27 10:15:29 crc kubenswrapper[4869]: I0127 10:15:29.589401 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 10:15:33 crc kubenswrapper[4869]: I0127 10:15:33.619113 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" exitCode=0 Jan 27 10:15:33 crc kubenswrapper[4869]: I0127 10:15:33.619196 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerDied","Data":"a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969"} Jan 27 10:15:33 crc kubenswrapper[4869]: I0127 10:15:33.619457 4869 scope.go:117] "RemoveContainer" containerID="739ececcb3ebc4add504fdf3bbe09cae62e0e840ebcdc8d9ffc99794ba9ce324" Jan 27 10:15:33 crc kubenswrapper[4869]: I0127 10:15:33.620199 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:15:33 crc kubenswrapper[4869]: E0127 10:15:33.620448 4869 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:15:36 crc kubenswrapper[4869]: I0127 10:15:36.034536 4869 scope.go:117] "RemoveContainer" containerID="8c7f03e95d2a9276b4ed1321026af83e9508632dd20e4f60a8e76254a09ed5c0" Jan 27 10:15:36 crc kubenswrapper[4869]: I0127 10:15:36.648129 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerStarted","Data":"46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4"} Jan 27 10:15:36 crc kubenswrapper[4869]: I0127 10:15:36.648372 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 10:15:40 crc kubenswrapper[4869]: I0127 10:15:40.691586 4869 generic.go:334] "Generic (PLEG): container finished" podID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" exitCode=0 Jan 27 10:15:40 crc kubenswrapper[4869]: I0127 10:15:40.692080 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerDied","Data":"46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4"} Jan 27 10:15:40 crc kubenswrapper[4869]: I0127 10:15:40.692114 4869 scope.go:117] "RemoveContainer" containerID="8c7f03e95d2a9276b4ed1321026af83e9508632dd20e4f60a8e76254a09ed5c0" Jan 27 10:15:40 crc kubenswrapper[4869]: I0127 10:15:40.692600 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:15:40 crc kubenswrapper[4869]: E0127 10:15:40.692786 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:15:45 crc kubenswrapper[4869]: I0127 10:15:45.697735 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:15:45 crc kubenswrapper[4869]: I0127 10:15:45.700952 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:15:47 crc kubenswrapper[4869]: I0127 10:15:47.032941 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:15:47 crc kubenswrapper[4869]: E0127 10:15:47.033446 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" 
podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:15:52 crc kubenswrapper[4869]: I0127 10:15:52.040692 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:15:52 crc kubenswrapper[4869]: E0127 10:15:52.041323 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:15:53 crc kubenswrapper[4869]: I0127 10:15:53.370464 4869 scope.go:117] "RemoveContainer" containerID="5acb071bf278e773630a67ba0b747812bd3d2f99ab4e4975c643e04f9c9da920" Jan 27 10:15:53 crc kubenswrapper[4869]: I0127 10:15:53.401438 4869 scope.go:117] "RemoveContainer" containerID="59fe7b1d2f8e7896a7a069dc92029c6baf0a2add719dab0bc745bfd8b386e066" Jan 27 10:15:53 crc kubenswrapper[4869]: I0127 10:15:53.464161 4869 scope.go:117] "RemoveContainer" containerID="20b3dda39de376453986365ef05a1863ba7f4c0cbf7917a8a7915c2b654753b8" Jan 27 10:16:00 crc kubenswrapper[4869]: I0127 10:16:00.033431 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:16:00 crc kubenswrapper[4869]: E0127 10:16:00.034218 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:16:04 crc kubenswrapper[4869]: I0127 10:16:04.036907 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:16:04 crc kubenswrapper[4869]: E0127 10:16:04.037558 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:16:14 crc kubenswrapper[4869]: I0127 10:16:14.034513 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:16:14 crc kubenswrapper[4869]: E0127 10:16:14.035497 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:16:15 crc kubenswrapper[4869]: I0127 10:16:15.697280 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:16:15 crc kubenswrapper[4869]: I0127 10:16:15.697558 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:16:17 crc kubenswrapper[4869]: I0127 10:16:17.033238 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:16:17 crc kubenswrapper[4869]: E0127 10:16:17.033472 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:16:25 crc kubenswrapper[4869]: I0127 10:16:25.033808 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:16:25 crc kubenswrapper[4869]: E0127 10:16:25.034585 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:16:29 crc kubenswrapper[4869]: I0127 10:16:29.033146 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:16:29 crc kubenswrapper[4869]: E0127 10:16:29.033659 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:16:37 crc kubenswrapper[4869]: I0127 10:16:37.032744 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:16:37 crc kubenswrapper[4869]: E0127 10:16:37.033310 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:16:44 crc kubenswrapper[4869]: I0127 10:16:44.033712 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:16:44 crc kubenswrapper[4869]: E0127 10:16:44.034655 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:16:45 crc kubenswrapper[4869]: I0127 10:16:45.698068 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:16:45 crc kubenswrapper[4869]: I0127 10:16:45.698352 4869 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:16:45 crc kubenswrapper[4869]: I0127 10:16:45.698434 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 10:16:45 crc kubenswrapper[4869]: I0127 10:16:45.698972 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e42a767d7d8267d9715f57b3658f53bd93ed0ffa29874755bfedd19bddd1072d"} pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 10:16:45 crc kubenswrapper[4869]: I0127 10:16:45.699033 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" containerID="cri-o://e42a767d7d8267d9715f57b3658f53bd93ed0ffa29874755bfedd19bddd1072d" gracePeriod=600 Jan 27 10:16:46 crc kubenswrapper[4869]: I0127 10:16:46.209157 4869 generic.go:334] "Generic (PLEG): container finished" podID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerID="e42a767d7d8267d9715f57b3658f53bd93ed0ffa29874755bfedd19bddd1072d" exitCode=0 Jan 27 10:16:46 crc kubenswrapper[4869]: I0127 10:16:46.209592 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerDied","Data":"e42a767d7d8267d9715f57b3658f53bd93ed0ffa29874755bfedd19bddd1072d"} Jan 27 10:16:46 crc kubenswrapper[4869]: I0127 10:16:46.209627 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerStarted","Data":"cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4"} Jan 27 10:16:46 crc kubenswrapper[4869]: I0127 10:16:46.209657 4869 scope.go:117] "RemoveContainer" containerID="1b72347200950347d222694240cf88dda5067f82f3f49e7890c07c595718e823" Jan 27 10:16:50 crc kubenswrapper[4869]: I0127 10:16:50.034431 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:16:50 crc kubenswrapper[4869]: E0127 10:16:50.035636 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:16:56 crc kubenswrapper[4869]: I0127 10:16:56.033559 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:16:56 crc kubenswrapper[4869]: E0127 10:16:56.034434 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" 
podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:17:02 crc kubenswrapper[4869]: I0127 10:17:02.039309 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:17:02 crc kubenswrapper[4869]: E0127 10:17:02.040109 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:17:11 crc kubenswrapper[4869]: I0127 10:17:11.032975 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:17:11 crc kubenswrapper[4869]: E0127 10:17:11.033638 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:17:15 crc kubenswrapper[4869]: I0127 10:17:15.034196 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:17:15 crc kubenswrapper[4869]: E0127 10:17:15.035135 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:17:23 crc kubenswrapper[4869]: I0127 10:17:23.033712 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:17:23 crc kubenswrapper[4869]: E0127 10:17:23.034576 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:17:30 crc kubenswrapper[4869]: I0127 10:17:30.034121 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:17:30 crc kubenswrapper[4869]: E0127 10:17:30.034548 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:17:36 crc kubenswrapper[4869]: I0127 10:17:36.034023 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:17:36 crc kubenswrapper[4869]: E0127 10:17:36.034776 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 
27 10:17:44 crc kubenswrapper[4869]: I0127 10:17:44.033739 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:17:44 crc kubenswrapper[4869]: E0127 10:17:44.034651 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:17:49 crc kubenswrapper[4869]: I0127 10:17:49.033196 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:17:49 crc kubenswrapper[4869]: E0127 10:17:49.034082 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:17:58 crc kubenswrapper[4869]: I0127 10:17:58.034257 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:17:58 crc kubenswrapper[4869]: E0127 10:17:58.035101 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:18:01 crc kubenswrapper[4869]: I0127 10:18:01.033755 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:18:01 crc kubenswrapper[4869]: E0127 10:18:01.033993 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:18:12 crc kubenswrapper[4869]: I0127 10:18:12.037769 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:18:12 crc kubenswrapper[4869]: E0127 10:18:12.039310 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:18:15 crc kubenswrapper[4869]: I0127 10:18:15.033778 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:18:15 crc kubenswrapper[4869]: E0127 10:18:15.034660 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:18:25 crc kubenswrapper[4869]: I0127 
10:18:25.033685 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:18:25 crc kubenswrapper[4869]: E0127 10:18:25.034284 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:18:30 crc kubenswrapper[4869]: I0127 10:18:30.033347 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:18:30 crc kubenswrapper[4869]: E0127 10:18:30.033931 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:18:38 crc kubenswrapper[4869]: I0127 10:18:38.032909 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:18:38 crc kubenswrapper[4869]: E0127 10:18:38.033712 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:18:41 crc kubenswrapper[4869]: I0127 10:18:41.033062 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:18:41 crc kubenswrapper[4869]: E0127 10:18:41.033531 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:18:45 crc kubenswrapper[4869]: I0127 10:18:45.697562 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:18:45 crc kubenswrapper[4869]: I0127 10:18:45.698098 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:18:53 crc kubenswrapper[4869]: I0127 10:18:53.033973 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:18:53 crc kubenswrapper[4869]: E0127 10:18:53.034589 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" 
pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:18:53 crc kubenswrapper[4869]: I0127 10:18:53.034608 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:18:53 crc kubenswrapper[4869]: E0127 10:18:53.034804 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:19:06 crc kubenswrapper[4869]: I0127 10:19:06.033010 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:19:06 crc kubenswrapper[4869]: E0127 10:19:06.033822 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:19:07 crc kubenswrapper[4869]: I0127 10:19:07.033373 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:19:07 crc kubenswrapper[4869]: E0127 10:19:07.033835 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:19:15 crc kubenswrapper[4869]: I0127 10:19:15.697891 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:19:15 crc kubenswrapper[4869]: I0127 10:19:15.698341 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:19:19 crc kubenswrapper[4869]: I0127 10:19:19.033230 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:19:19 crc kubenswrapper[4869]: E0127 10:19:19.034036 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:19:21 crc kubenswrapper[4869]: I0127 10:19:21.033068 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:19:21 crc kubenswrapper[4869]: E0127 10:19:21.033474 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:19:30 crc kubenswrapper[4869]: I0127 10:19:30.049415 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-dc7e-account-create-update-dsvtj"] Jan 27 10:19:30 crc kubenswrapper[4869]: I0127 10:19:30.056670 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-wwrvj"] Jan 27 10:19:30 crc kubenswrapper[4869]: I0127 10:19:30.063208 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-dc7e-account-create-update-dsvtj"] Jan 27 10:19:30 crc kubenswrapper[4869]: I0127 10:19:30.070505 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-wwrvj"] Jan 27 10:19:31 crc kubenswrapper[4869]: I0127 10:19:31.047328 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-4dqbq"] Jan 27 10:19:31 crc kubenswrapper[4869]: I0127 10:19:31.058242 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3ac0-account-create-update-s57tn"] Jan 27 10:19:31 crc kubenswrapper[4869]: I0127 10:19:31.070778 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3ac0-account-create-update-s57tn"] Jan 27 10:19:31 crc kubenswrapper[4869]: I0127 10:19:31.082611 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-4dqbq"] Jan 27 10:19:31 crc kubenswrapper[4869]: I0127 10:19:31.086891 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-cx2rm"] Jan 27 10:19:31 crc kubenswrapper[4869]: I0127 10:19:31.093518 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-c2a7-account-create-update-7z2sg"] Jan 27 10:19:31 crc kubenswrapper[4869]: I0127 10:19:31.098747 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-c2a7-account-create-update-7z2sg"] Jan 27 10:19:31 crc kubenswrapper[4869]: I0127 10:19:31.104964 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-cx2rm"] Jan 27 10:19:32 crc kubenswrapper[4869]: I0127 10:19:32.038480 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:19:32 crc kubenswrapper[4869]: E0127 10:19:32.038929 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:19:32 crc kubenswrapper[4869]: I0127 10:19:32.043254 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="110c6611-8d0d-4f46-94a3-eab1a21743e9" path="/var/lib/kubelet/pods/110c6611-8d0d-4f46-94a3-eab1a21743e9/volumes" Jan 27 10:19:32 crc kubenswrapper[4869]: I0127 10:19:32.044252 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d604b62-cdb4-4227-997f-defd9a3ca643" path="/var/lib/kubelet/pods/1d604b62-cdb4-4227-997f-defd9a3ca643/volumes" Jan 27 10:19:32 crc kubenswrapper[4869]: I0127 10:19:32.044951 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ada9983-506d-4de8-9d7d-8f7fc1bcb50f" path="/var/lib/kubelet/pods/5ada9983-506d-4de8-9d7d-8f7fc1bcb50f/volumes" 
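The recurring pod_workers.go "Error syncing pod" entries throughout this stretch of the journal are kubelet's crash-loop back-off in action: each RemoveContainer attempt for the two rabbitmq pods is skipped while the back-off window is open, and the window widens from 2m40s to 5m0s after the next failed start. A minimal, hypothetical sketch for tallying these events from a journal excerpt like this one (the script name and everything in it are illustrative, not part of the log):

    #!/usr/bin/env python3
    # tally_backoff.py -- hypothetical helper, not part of this journal.
    # Counts kubelet CrashLoopBackOff "Error syncing pod" events per pod
    # and back-off duration from journal text piped on stdin, e.g.:
    #   journalctl -u kubelet --no-pager | python3 tally_backoff.py
    import re
    import sys
    from collections import Counter

    # Matches the pod_workers.go message as printed above; the \\? keeps it
    # working whether or not the inner quotes are still backslash-escaped.
    PATTERN = re.compile(
        r'CrashLoopBackOff: \\?"back-off (?P<delay>\S+) restarting failed '
        r'container=(?P<container>\S+) pod=(?P<name>[^_\s]+)_(?P<ns>[^(\s]+)\('
    )

    def tally(lines):
        counts = Counter()
        for line in lines:
            m = PATTERN.search(line)
            if m:
                counts[(m['ns'], m['name'], m['delay'])] += 1
        return counts

    if __name__ == '__main__':
        for (ns, name, delay), n in sorted(tally(sys.stdin).items()):
            print(f'{ns}/{name}\t{delay}\t{n}')

Fed this excerpt on stdin, it would report per-pod counts keyed by back-off duration for rabbitmq-server-0 and rabbitmq-cell1-server-0 in the openstack namespace, which makes the transition from the 2m40s window to the 5m0s window easy to spot.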
Jan 27 10:19:32 crc kubenswrapper[4869]: I0127 10:19:32.045667 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ec835b0-a5a2-4a65-ad57-8282ba92fc1c" path="/var/lib/kubelet/pods/9ec835b0-a5a2-4a65-ad57-8282ba92fc1c/volumes" Jan 27 10:19:32 crc kubenswrapper[4869]: I0127 10:19:32.047060 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b314971b-a6d0-4364-9753-480190c2ef5c" path="/var/lib/kubelet/pods/b314971b-a6d0-4364-9753-480190c2ef5c/volumes" Jan 27 10:19:32 crc kubenswrapper[4869]: I0127 10:19:32.047714 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f6337c-88cf-4544-b1f8-082325ebd6db" path="/var/lib/kubelet/pods/c5f6337c-88cf-4544-b1f8-082325ebd6db/volumes" Jan 27 10:19:34 crc kubenswrapper[4869]: I0127 10:19:34.033449 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:19:34 crc kubenswrapper[4869]: E0127 10:19:34.033801 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:19:40 crc kubenswrapper[4869]: I0127 10:19:40.028539 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-sl8h2"] Jan 27 10:19:40 crc kubenswrapper[4869]: I0127 10:19:40.043235 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-sl8h2"] Jan 27 10:19:42 crc kubenswrapper[4869]: I0127 10:19:42.042329 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34b4f7c0-c984-455c-a928-9ebf3243bfe8" path="/var/lib/kubelet/pods/34b4f7c0-c984-455c-a928-9ebf3243bfe8/volumes" Jan 27 10:19:45 crc kubenswrapper[4869]: I0127 10:19:45.033730 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:19:45 crc kubenswrapper[4869]: E0127 10:19:45.034206 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:19:45 crc kubenswrapper[4869]: I0127 10:19:45.697755 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:19:45 crc kubenswrapper[4869]: I0127 10:19:45.697811 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:19:45 crc kubenswrapper[4869]: I0127 10:19:45.697861 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 10:19:45 crc kubenswrapper[4869]: I0127 10:19:45.698682 4869 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4"} pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 10:19:45 crc kubenswrapper[4869]: I0127 10:19:45.698765 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" containerID="cri-o://cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" gracePeriod=600 Jan 27 10:19:46 crc kubenswrapper[4869]: I0127 10:19:46.033790 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:19:46 crc kubenswrapper[4869]: E0127 10:19:46.034368 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:19:46 crc kubenswrapper[4869]: E0127 10:19:46.387349 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:19:46 crc kubenswrapper[4869]: I0127 10:19:46.452224 4869 generic.go:334] "Generic (PLEG): container finished" podID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" exitCode=0 Jan 27 10:19:46 crc kubenswrapper[4869]: I0127 10:19:46.452259 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerDied","Data":"cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4"} Jan 27 10:19:46 crc kubenswrapper[4869]: I0127 10:19:46.452322 4869 scope.go:117] "RemoveContainer" containerID="e42a767d7d8267d9715f57b3658f53bd93ed0ffa29874755bfedd19bddd1072d" Jan 27 10:19:46 crc kubenswrapper[4869]: I0127 10:19:46.454173 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:19:46 crc kubenswrapper[4869]: E0127 10:19:46.455117 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:19:53 crc kubenswrapper[4869]: I0127 10:19:53.617317 4869 scope.go:117] "RemoveContainer" containerID="b61a1962a0627adec90528afcb14548e6facb43632ff91879ca76fa5329e37a9" Jan 27 10:19:53 crc kubenswrapper[4869]: I0127 10:19:53.642217 4869 scope.go:117] "RemoveContainer" 
containerID="113b17db5d8d3328a1cbec18d180ea1f651386583fff61080229d02f8deca43e" Jan 27 10:19:53 crc kubenswrapper[4869]: I0127 10:19:53.674465 4869 scope.go:117] "RemoveContainer" containerID="62950fcb8b68056e29a8d461929c7582581eb2f112fd6a4fe82a3513ffc4e8b1" Jan 27 10:19:53 crc kubenswrapper[4869]: I0127 10:19:53.707820 4869 scope.go:117] "RemoveContainer" containerID="e239c2839d03971d82950ff237289596a2d933c8b9cb5698fb7440e4ec1e4993" Jan 27 10:19:53 crc kubenswrapper[4869]: I0127 10:19:53.749448 4869 scope.go:117] "RemoveContainer" containerID="6f9ba9e17522fd0c616709bd075a2b72dc9586262694908705bc5949b458c1db" Jan 27 10:19:53 crc kubenswrapper[4869]: I0127 10:19:53.776535 4869 scope.go:117] "RemoveContainer" containerID="5ff0694b17b24b23a78b92a14b7f964c9d85d21f09005a4787f65cccf15f6c2c" Jan 27 10:19:53 crc kubenswrapper[4869]: I0127 10:19:53.808746 4869 scope.go:117] "RemoveContainer" containerID="5d5e4ee6fc95efc75e499a0e28e6e1c362385f8579a8d8abf1119385e3244f3f" Jan 27 10:19:58 crc kubenswrapper[4869]: I0127 10:19:58.033187 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:19:58 crc kubenswrapper[4869]: E0127 10:19:58.033889 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:19:58 crc kubenswrapper[4869]: I0127 10:19:58.034098 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:19:58 crc kubenswrapper[4869]: E0127 10:19:58.034589 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:19:58 crc kubenswrapper[4869]: I0127 10:19:58.048516 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-b9v6v"] Jan 27 10:19:58 crc kubenswrapper[4869]: I0127 10:19:58.056665 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-b9v6v"] Jan 27 10:19:59 crc kubenswrapper[4869]: I0127 10:19:59.033621 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:19:59 crc kubenswrapper[4869]: E0127 10:19:59.033872 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:19:59 crc kubenswrapper[4869]: I0127 10:19:59.115342 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jhz64"] Jan 27 10:19:59 crc kubenswrapper[4869]: E0127 10:19:59.115666 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="315448d9-e8aa-4644-a3bb-77cc1145fd1e" containerName="collect-profiles" Jan 27 10:19:59 crc kubenswrapper[4869]: I0127 
10:19:59.115678 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="315448d9-e8aa-4644-a3bb-77cc1145fd1e" containerName="collect-profiles" Jan 27 10:19:59 crc kubenswrapper[4869]: I0127 10:19:59.115849 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="315448d9-e8aa-4644-a3bb-77cc1145fd1e" containerName="collect-profiles" Jan 27 10:19:59 crc kubenswrapper[4869]: I0127 10:19:59.116906 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jhz64" Jan 27 10:19:59 crc kubenswrapper[4869]: I0127 10:19:59.133948 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jhz64"] Jan 27 10:19:59 crc kubenswrapper[4869]: I0127 10:19:59.163878 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/693d7bac-2ac2-4031-b8c2-1ed27ded1fb7-utilities\") pod \"redhat-operators-jhz64\" (UID: \"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7\") " pod="openshift-marketplace/redhat-operators-jhz64" Jan 27 10:19:59 crc kubenswrapper[4869]: I0127 10:19:59.163938 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/693d7bac-2ac2-4031-b8c2-1ed27ded1fb7-catalog-content\") pod \"redhat-operators-jhz64\" (UID: \"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7\") " pod="openshift-marketplace/redhat-operators-jhz64" Jan 27 10:19:59 crc kubenswrapper[4869]: I0127 10:19:59.164046 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9fbt\" (UniqueName: \"kubernetes.io/projected/693d7bac-2ac2-4031-b8c2-1ed27ded1fb7-kube-api-access-s9fbt\") pod \"redhat-operators-jhz64\" (UID: \"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7\") " pod="openshift-marketplace/redhat-operators-jhz64" Jan 27 10:19:59 crc kubenswrapper[4869]: I0127 10:19:59.264936 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/693d7bac-2ac2-4031-b8c2-1ed27ded1fb7-utilities\") pod \"redhat-operators-jhz64\" (UID: \"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7\") " pod="openshift-marketplace/redhat-operators-jhz64" Jan 27 10:19:59 crc kubenswrapper[4869]: I0127 10:19:59.264988 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/693d7bac-2ac2-4031-b8c2-1ed27ded1fb7-catalog-content\") pod \"redhat-operators-jhz64\" (UID: \"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7\") " pod="openshift-marketplace/redhat-operators-jhz64" Jan 27 10:19:59 crc kubenswrapper[4869]: I0127 10:19:59.265052 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9fbt\" (UniqueName: \"kubernetes.io/projected/693d7bac-2ac2-4031-b8c2-1ed27ded1fb7-kube-api-access-s9fbt\") pod \"redhat-operators-jhz64\" (UID: \"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7\") " pod="openshift-marketplace/redhat-operators-jhz64" Jan 27 10:19:59 crc kubenswrapper[4869]: I0127 10:19:59.265473 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/693d7bac-2ac2-4031-b8c2-1ed27ded1fb7-utilities\") pod \"redhat-operators-jhz64\" (UID: \"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7\") " pod="openshift-marketplace/redhat-operators-jhz64" Jan 27 10:19:59 crc kubenswrapper[4869]: I0127 10:19:59.265556 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/693d7bac-2ac2-4031-b8c2-1ed27ded1fb7-catalog-content\") pod \"redhat-operators-jhz64\" (UID: \"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7\") " pod="openshift-marketplace/redhat-operators-jhz64" Jan 27 10:19:59 crc kubenswrapper[4869]: I0127 10:19:59.285889 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9fbt\" (UniqueName: \"kubernetes.io/projected/693d7bac-2ac2-4031-b8c2-1ed27ded1fb7-kube-api-access-s9fbt\") pod \"redhat-operators-jhz64\" (UID: \"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7\") " pod="openshift-marketplace/redhat-operators-jhz64" Jan 27 10:19:59 crc kubenswrapper[4869]: I0127 10:19:59.437284 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jhz64" Jan 27 10:19:59 crc kubenswrapper[4869]: I0127 10:19:59.908778 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jhz64"] Jan 27 10:20:00 crc kubenswrapper[4869]: I0127 10:20:00.044761 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97d81ee5-695a-463d-8e02-30d6abcc13c3" path="/var/lib/kubelet/pods/97d81ee5-695a-463d-8e02-30d6abcc13c3/volumes" Jan 27 10:20:00 crc kubenswrapper[4869]: I0127 10:20:00.576774 4869 generic.go:334] "Generic (PLEG): container finished" podID="693d7bac-2ac2-4031-b8c2-1ed27ded1fb7" containerID="2a788f942960f9e99369cd9f65c12e49f80d94ad3deb267c52f9dbc519582d7f" exitCode=0 Jan 27 10:20:00 crc kubenswrapper[4869]: I0127 10:20:00.576873 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhz64" event={"ID":"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7","Type":"ContainerDied","Data":"2a788f942960f9e99369cd9f65c12e49f80d94ad3deb267c52f9dbc519582d7f"} Jan 27 10:20:00 crc kubenswrapper[4869]: I0127 10:20:00.577125 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhz64" event={"ID":"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7","Type":"ContainerStarted","Data":"3c0dcf873961a07c4d478bfdb396150ba63a7d0fb382f18d208c581317d52036"} Jan 27 10:20:00 crc kubenswrapper[4869]: I0127 10:20:00.579435 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 10:20:01 crc kubenswrapper[4869]: I0127 10:20:01.607479 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhz64" event={"ID":"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7","Type":"ContainerStarted","Data":"5a063489c14d2b40df008e4e2f61359bd8db9fbe378edc47ae5167941e00932c"} Jan 27 10:20:02 crc kubenswrapper[4869]: I0127 10:20:02.615317 4869 generic.go:334] "Generic (PLEG): container finished" podID="693d7bac-2ac2-4031-b8c2-1ed27ded1fb7" containerID="5a063489c14d2b40df008e4e2f61359bd8db9fbe378edc47ae5167941e00932c" exitCode=0 Jan 27 10:20:02 crc kubenswrapper[4869]: I0127 10:20:02.615358 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhz64" event={"ID":"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7","Type":"ContainerDied","Data":"5a063489c14d2b40df008e4e2f61359bd8db9fbe378edc47ae5167941e00932c"} Jan 27 10:20:04 crc kubenswrapper[4869]: I0127 10:20:04.631509 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhz64" 
event={"ID":"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7","Type":"ContainerStarted","Data":"2b2af4081fcab0ffb04cdcdb8ffcd9d1dcebf0b271719309ef7808767790a7dd"} Jan 27 10:20:04 crc kubenswrapper[4869]: I0127 10:20:04.658896 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jhz64" podStartSLOduration=2.786157989 podStartE2EDuration="5.658873783s" podCreationTimestamp="2026-01-27 10:19:59 +0000 UTC" firstStartedPulling="2026-01-27 10:20:00.579174359 +0000 UTC m=+1569.199598452" lastFinishedPulling="2026-01-27 10:20:03.451890163 +0000 UTC m=+1572.072314246" observedRunningTime="2026-01-27 10:20:04.652912208 +0000 UTC m=+1573.273336301" watchObservedRunningTime="2026-01-27 10:20:04.658873783 +0000 UTC m=+1573.279297866" Jan 27 10:20:09 crc kubenswrapper[4869]: I0127 10:20:09.401038 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c4grt"] Jan 27 10:20:09 crc kubenswrapper[4869]: I0127 10:20:09.403596 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4grt" Jan 27 10:20:09 crc kubenswrapper[4869]: I0127 10:20:09.416989 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c4grt"] Jan 27 10:20:09 crc kubenswrapper[4869]: I0127 10:20:09.455579 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jhz64" Jan 27 10:20:09 crc kubenswrapper[4869]: I0127 10:20:09.455985 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jhz64" Jan 27 10:20:09 crc kubenswrapper[4869]: I0127 10:20:09.458263 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e70b21b-1833-4078-b2ce-3c2ace700666-utilities\") pod \"community-operators-c4grt\" (UID: \"0e70b21b-1833-4078-b2ce-3c2ace700666\") " pod="openshift-marketplace/community-operators-c4grt" Jan 27 10:20:09 crc kubenswrapper[4869]: I0127 10:20:09.458470 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dwcp\" (UniqueName: \"kubernetes.io/projected/0e70b21b-1833-4078-b2ce-3c2ace700666-kube-api-access-4dwcp\") pod \"community-operators-c4grt\" (UID: \"0e70b21b-1833-4078-b2ce-3c2ace700666\") " pod="openshift-marketplace/community-operators-c4grt" Jan 27 10:20:09 crc kubenswrapper[4869]: I0127 10:20:09.458580 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e70b21b-1833-4078-b2ce-3c2ace700666-catalog-content\") pod \"community-operators-c4grt\" (UID: \"0e70b21b-1833-4078-b2ce-3c2ace700666\") " pod="openshift-marketplace/community-operators-c4grt" Jan 27 10:20:09 crc kubenswrapper[4869]: I0127 10:20:09.559690 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dwcp\" (UniqueName: \"kubernetes.io/projected/0e70b21b-1833-4078-b2ce-3c2ace700666-kube-api-access-4dwcp\") pod \"community-operators-c4grt\" (UID: \"0e70b21b-1833-4078-b2ce-3c2ace700666\") " pod="openshift-marketplace/community-operators-c4grt" Jan 27 10:20:09 crc kubenswrapper[4869]: I0127 10:20:09.560088 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0e70b21b-1833-4078-b2ce-3c2ace700666-catalog-content\") pod \"community-operators-c4grt\" (UID: \"0e70b21b-1833-4078-b2ce-3c2ace700666\") " pod="openshift-marketplace/community-operators-c4grt" Jan 27 10:20:09 crc kubenswrapper[4869]: I0127 10:20:09.560145 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e70b21b-1833-4078-b2ce-3c2ace700666-utilities\") pod \"community-operators-c4grt\" (UID: \"0e70b21b-1833-4078-b2ce-3c2ace700666\") " pod="openshift-marketplace/community-operators-c4grt" Jan 27 10:20:09 crc kubenswrapper[4869]: I0127 10:20:09.560638 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e70b21b-1833-4078-b2ce-3c2ace700666-catalog-content\") pod \"community-operators-c4grt\" (UID: \"0e70b21b-1833-4078-b2ce-3c2ace700666\") " pod="openshift-marketplace/community-operators-c4grt" Jan 27 10:20:09 crc kubenswrapper[4869]: I0127 10:20:09.560665 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e70b21b-1833-4078-b2ce-3c2ace700666-utilities\") pod \"community-operators-c4grt\" (UID: \"0e70b21b-1833-4078-b2ce-3c2ace700666\") " pod="openshift-marketplace/community-operators-c4grt" Jan 27 10:20:09 crc kubenswrapper[4869]: I0127 10:20:09.580408 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dwcp\" (UniqueName: \"kubernetes.io/projected/0e70b21b-1833-4078-b2ce-3c2ace700666-kube-api-access-4dwcp\") pod \"community-operators-c4grt\" (UID: \"0e70b21b-1833-4078-b2ce-3c2ace700666\") " pod="openshift-marketplace/community-operators-c4grt" Jan 27 10:20:09 crc kubenswrapper[4869]: I0127 10:20:09.763646 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c4grt" Jan 27 10:20:10 crc kubenswrapper[4869]: I0127 10:20:10.033165 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:20:10 crc kubenswrapper[4869]: I0127 10:20:10.033541 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:20:10 crc kubenswrapper[4869]: E0127 10:20:10.033696 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:20:10 crc kubenswrapper[4869]: E0127 10:20:10.033753 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:20:10 crc kubenswrapper[4869]: I0127 10:20:10.267406 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c4grt"] Jan 27 10:20:10 crc kubenswrapper[4869]: I0127 10:20:10.512705 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jhz64" podUID="693d7bac-2ac2-4031-b8c2-1ed27ded1fb7" containerName="registry-server" probeResult="failure" output=< Jan 27 10:20:10 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Jan 27 10:20:10 crc kubenswrapper[4869]: > Jan 27 10:20:10 crc kubenswrapper[4869]: I0127 10:20:10.674631 4869 generic.go:334] "Generic (PLEG): container finished" podID="0e70b21b-1833-4078-b2ce-3c2ace700666" containerID="03ad18b5b54b99b32d56f27ffbdcf1c333b8d8ef38b83ca44576a4dbb2f5c1c5" exitCode=0 Jan 27 10:20:10 crc kubenswrapper[4869]: I0127 10:20:10.674676 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4grt" event={"ID":"0e70b21b-1833-4078-b2ce-3c2ace700666","Type":"ContainerDied","Data":"03ad18b5b54b99b32d56f27ffbdcf1c333b8d8ef38b83ca44576a4dbb2f5c1c5"} Jan 27 10:20:10 crc kubenswrapper[4869]: I0127 10:20:10.674702 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4grt" event={"ID":"0e70b21b-1833-4078-b2ce-3c2ace700666","Type":"ContainerStarted","Data":"85804694968e03d6a6f5636b7bf835a79f8231a9f28522b60a1928482b1e80c8"} Jan 27 10:20:11 crc kubenswrapper[4869]: I0127 10:20:11.684072 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4grt" event={"ID":"0e70b21b-1833-4078-b2ce-3c2ace700666","Type":"ContainerStarted","Data":"de2c3ace15c0236f8d549d2020a7257a5e50175412b84b00148fe628883d4888"} Jan 27 10:20:12 crc kubenswrapper[4869]: I0127 10:20:12.038096 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:20:12 crc kubenswrapper[4869]: E0127 10:20:12.038587 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:20:12 crc kubenswrapper[4869]: I0127 10:20:12.694075 4869 generic.go:334] "Generic (PLEG): container finished" podID="0e70b21b-1833-4078-b2ce-3c2ace700666" containerID="de2c3ace15c0236f8d549d2020a7257a5e50175412b84b00148fe628883d4888" exitCode=0 Jan 27 10:20:12 crc kubenswrapper[4869]: I0127 10:20:12.694140 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4grt" event={"ID":"0e70b21b-1833-4078-b2ce-3c2ace700666","Type":"ContainerDied","Data":"de2c3ace15c0236f8d549d2020a7257a5e50175412b84b00148fe628883d4888"} Jan 27 10:20:13 crc kubenswrapper[4869]: I0127 10:20:13.702564 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4grt" event={"ID":"0e70b21b-1833-4078-b2ce-3c2ace700666","Type":"ContainerStarted","Data":"2af940a67c1ffab4b7ee43796e304345ac658f8f10e063b431cba49601f28a81"} Jan 27 10:20:13 crc kubenswrapper[4869]: I0127 10:20:13.720960 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c4grt" podStartSLOduration=2.236058637 podStartE2EDuration="4.720941736s" podCreationTimestamp="2026-01-27 10:20:09 +0000 UTC" firstStartedPulling="2026-01-27 10:20:10.6768937 +0000 UTC m=+1579.297317793" lastFinishedPulling="2026-01-27 10:20:13.161776809 +0000 UTC m=+1581.782200892" observedRunningTime="2026-01-27 10:20:13.719012725 +0000 UTC m=+1582.339436818" watchObservedRunningTime="2026-01-27 10:20:13.720941736 +0000 UTC m=+1582.341365819" Jan 27 10:20:19 crc kubenswrapper[4869]: I0127 10:20:19.484937 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jhz64" Jan 27 10:20:19 crc kubenswrapper[4869]: I0127 10:20:19.529163 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jhz64" Jan 27 10:20:19 crc kubenswrapper[4869]: I0127 10:20:19.723005 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jhz64"] Jan 27 10:20:19 crc kubenswrapper[4869]: I0127 10:20:19.763942 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c4grt" Jan 27 10:20:19 crc kubenswrapper[4869]: I0127 10:20:19.764001 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c4grt" Jan 27 10:20:19 crc kubenswrapper[4869]: I0127 10:20:19.804386 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c4grt" Jan 27 10:20:20 crc kubenswrapper[4869]: I0127 10:20:20.750051 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jhz64" podUID="693d7bac-2ac2-4031-b8c2-1ed27ded1fb7" containerName="registry-server" containerID="cri-o://2b2af4081fcab0ffb04cdcdb8ffcd9d1dcebf0b271719309ef7808767790a7dd" gracePeriod=2 Jan 27 10:20:20 crc kubenswrapper[4869]: I0127 10:20:20.796643 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c4grt" Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.230218 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jhz64" Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.284001 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/693d7bac-2ac2-4031-b8c2-1ed27ded1fb7-catalog-content\") pod \"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7\" (UID: \"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7\") " Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.285051 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9fbt\" (UniqueName: \"kubernetes.io/projected/693d7bac-2ac2-4031-b8c2-1ed27ded1fb7-kube-api-access-s9fbt\") pod \"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7\" (UID: \"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7\") " Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.285088 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/693d7bac-2ac2-4031-b8c2-1ed27ded1fb7-utilities\") pod \"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7\" (UID: \"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7\") " Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.286075 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/693d7bac-2ac2-4031-b8c2-1ed27ded1fb7-utilities" (OuterVolumeSpecName: "utilities") pod "693d7bac-2ac2-4031-b8c2-1ed27ded1fb7" (UID: "693d7bac-2ac2-4031-b8c2-1ed27ded1fb7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.297341 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/693d7bac-2ac2-4031-b8c2-1ed27ded1fb7-kube-api-access-s9fbt" (OuterVolumeSpecName: "kube-api-access-s9fbt") pod "693d7bac-2ac2-4031-b8c2-1ed27ded1fb7" (UID: "693d7bac-2ac2-4031-b8c2-1ed27ded1fb7"). InnerVolumeSpecName "kube-api-access-s9fbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.387166 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/693d7bac-2ac2-4031-b8c2-1ed27ded1fb7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.387206 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9fbt\" (UniqueName: \"kubernetes.io/projected/693d7bac-2ac2-4031-b8c2-1ed27ded1fb7-kube-api-access-s9fbt\") on node \"crc\" DevicePath \"\"" Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.398312 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/693d7bac-2ac2-4031-b8c2-1ed27ded1fb7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "693d7bac-2ac2-4031-b8c2-1ed27ded1fb7" (UID: "693d7bac-2ac2-4031-b8c2-1ed27ded1fb7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.490006 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/693d7bac-2ac2-4031-b8c2-1ed27ded1fb7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.761016 4869 generic.go:334] "Generic (PLEG): container finished" podID="693d7bac-2ac2-4031-b8c2-1ed27ded1fb7" containerID="2b2af4081fcab0ffb04cdcdb8ffcd9d1dcebf0b271719309ef7808767790a7dd" exitCode=0 Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.761853 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jhz64" Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.773366 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhz64" event={"ID":"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7","Type":"ContainerDied","Data":"2b2af4081fcab0ffb04cdcdb8ffcd9d1dcebf0b271719309ef7808767790a7dd"} Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.773667 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhz64" event={"ID":"693d7bac-2ac2-4031-b8c2-1ed27ded1fb7","Type":"ContainerDied","Data":"3c0dcf873961a07c4d478bfdb396150ba63a7d0fb382f18d208c581317d52036"} Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.773806 4869 scope.go:117] "RemoveContainer" containerID="2b2af4081fcab0ffb04cdcdb8ffcd9d1dcebf0b271719309ef7808767790a7dd" Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.803166 4869 scope.go:117] "RemoveContainer" containerID="5a063489c14d2b40df008e4e2f61359bd8db9fbe378edc47ae5167941e00932c" Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.810227 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jhz64"] Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.820222 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jhz64"] Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.839395 4869 scope.go:117] "RemoveContainer" containerID="2a788f942960f9e99369cd9f65c12e49f80d94ad3deb267c52f9dbc519582d7f" Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.880028 4869 scope.go:117] "RemoveContainer" containerID="2b2af4081fcab0ffb04cdcdb8ffcd9d1dcebf0b271719309ef7808767790a7dd" Jan 27 10:20:21 crc kubenswrapper[4869]: E0127 10:20:21.880522 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b2af4081fcab0ffb04cdcdb8ffcd9d1dcebf0b271719309ef7808767790a7dd\": container with ID starting with 2b2af4081fcab0ffb04cdcdb8ffcd9d1dcebf0b271719309ef7808767790a7dd not found: ID does not exist" containerID="2b2af4081fcab0ffb04cdcdb8ffcd9d1dcebf0b271719309ef7808767790a7dd" Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.880563 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b2af4081fcab0ffb04cdcdb8ffcd9d1dcebf0b271719309ef7808767790a7dd"} err="failed to get container status \"2b2af4081fcab0ffb04cdcdb8ffcd9d1dcebf0b271719309ef7808767790a7dd\": rpc error: code = NotFound desc = could not find container \"2b2af4081fcab0ffb04cdcdb8ffcd9d1dcebf0b271719309ef7808767790a7dd\": container with ID starting with 2b2af4081fcab0ffb04cdcdb8ffcd9d1dcebf0b271719309ef7808767790a7dd not found: ID does not exist" Jan 27 10:20:21 crc 
kubenswrapper[4869]: I0127 10:20:21.880590 4869 scope.go:117] "RemoveContainer" containerID="5a063489c14d2b40df008e4e2f61359bd8db9fbe378edc47ae5167941e00932c" Jan 27 10:20:21 crc kubenswrapper[4869]: E0127 10:20:21.880871 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a063489c14d2b40df008e4e2f61359bd8db9fbe378edc47ae5167941e00932c\": container with ID starting with 5a063489c14d2b40df008e4e2f61359bd8db9fbe378edc47ae5167941e00932c not found: ID does not exist" containerID="5a063489c14d2b40df008e4e2f61359bd8db9fbe378edc47ae5167941e00932c" Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.880910 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a063489c14d2b40df008e4e2f61359bd8db9fbe378edc47ae5167941e00932c"} err="failed to get container status \"5a063489c14d2b40df008e4e2f61359bd8db9fbe378edc47ae5167941e00932c\": rpc error: code = NotFound desc = could not find container \"5a063489c14d2b40df008e4e2f61359bd8db9fbe378edc47ae5167941e00932c\": container with ID starting with 5a063489c14d2b40df008e4e2f61359bd8db9fbe378edc47ae5167941e00932c not found: ID does not exist" Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.880936 4869 scope.go:117] "RemoveContainer" containerID="2a788f942960f9e99369cd9f65c12e49f80d94ad3deb267c52f9dbc519582d7f" Jan 27 10:20:21 crc kubenswrapper[4869]: E0127 10:20:21.881180 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a788f942960f9e99369cd9f65c12e49f80d94ad3deb267c52f9dbc519582d7f\": container with ID starting with 2a788f942960f9e99369cd9f65c12e49f80d94ad3deb267c52f9dbc519582d7f not found: ID does not exist" containerID="2a788f942960f9e99369cd9f65c12e49f80d94ad3deb267c52f9dbc519582d7f" Jan 27 10:20:21 crc kubenswrapper[4869]: I0127 10:20:21.881232 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a788f942960f9e99369cd9f65c12e49f80d94ad3deb267c52f9dbc519582d7f"} err="failed to get container status \"2a788f942960f9e99369cd9f65c12e49f80d94ad3deb267c52f9dbc519582d7f\": rpc error: code = NotFound desc = could not find container \"2a788f942960f9e99369cd9f65c12e49f80d94ad3deb267c52f9dbc519582d7f\": container with ID starting with 2a788f942960f9e99369cd9f65c12e49f80d94ad3deb267c52f9dbc519582d7f not found: ID does not exist" Jan 27 10:20:22 crc kubenswrapper[4869]: I0127 10:20:22.042669 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="693d7bac-2ac2-4031-b8c2-1ed27ded1fb7" path="/var/lib/kubelet/pods/693d7bac-2ac2-4031-b8c2-1ed27ded1fb7/volumes" Jan 27 10:20:22 crc kubenswrapper[4869]: I0127 10:20:22.122396 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c4grt"] Jan 27 10:20:22 crc kubenswrapper[4869]: I0127 10:20:22.767314 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c4grt" podUID="0e70b21b-1833-4078-b2ce-3c2ace700666" containerName="registry-server" containerID="cri-o://2af940a67c1ffab4b7ee43796e304345ac658f8f10e063b431cba49601f28a81" gracePeriod=2 Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.155902 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c4grt" Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.219055 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dwcp\" (UniqueName: \"kubernetes.io/projected/0e70b21b-1833-4078-b2ce-3c2ace700666-kube-api-access-4dwcp\") pod \"0e70b21b-1833-4078-b2ce-3c2ace700666\" (UID: \"0e70b21b-1833-4078-b2ce-3c2ace700666\") " Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.219099 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e70b21b-1833-4078-b2ce-3c2ace700666-utilities\") pod \"0e70b21b-1833-4078-b2ce-3c2ace700666\" (UID: \"0e70b21b-1833-4078-b2ce-3c2ace700666\") " Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.219150 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e70b21b-1833-4078-b2ce-3c2ace700666-catalog-content\") pod \"0e70b21b-1833-4078-b2ce-3c2ace700666\" (UID: \"0e70b21b-1833-4078-b2ce-3c2ace700666\") " Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.220400 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e70b21b-1833-4078-b2ce-3c2ace700666-utilities" (OuterVolumeSpecName: "utilities") pod "0e70b21b-1833-4078-b2ce-3c2ace700666" (UID: "0e70b21b-1833-4078-b2ce-3c2ace700666"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.224239 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e70b21b-1833-4078-b2ce-3c2ace700666-kube-api-access-4dwcp" (OuterVolumeSpecName: "kube-api-access-4dwcp") pod "0e70b21b-1833-4078-b2ce-3c2ace700666" (UID: "0e70b21b-1833-4078-b2ce-3c2ace700666"). InnerVolumeSpecName "kube-api-access-4dwcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.274291 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e70b21b-1833-4078-b2ce-3c2ace700666-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e70b21b-1833-4078-b2ce-3c2ace700666" (UID: "0e70b21b-1833-4078-b2ce-3c2ace700666"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.321781 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dwcp\" (UniqueName: \"kubernetes.io/projected/0e70b21b-1833-4078-b2ce-3c2ace700666-kube-api-access-4dwcp\") on node \"crc\" DevicePath \"\"" Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.321839 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e70b21b-1833-4078-b2ce-3c2ace700666-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.321850 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e70b21b-1833-4078-b2ce-3c2ace700666-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.774824 4869 generic.go:334] "Generic (PLEG): container finished" podID="0e70b21b-1833-4078-b2ce-3c2ace700666" containerID="2af940a67c1ffab4b7ee43796e304345ac658f8f10e063b431cba49601f28a81" exitCode=0 Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.774891 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4grt" Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.774895 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4grt" event={"ID":"0e70b21b-1833-4078-b2ce-3c2ace700666","Type":"ContainerDied","Data":"2af940a67c1ffab4b7ee43796e304345ac658f8f10e063b431cba49601f28a81"} Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.775291 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4grt" event={"ID":"0e70b21b-1833-4078-b2ce-3c2ace700666","Type":"ContainerDied","Data":"85804694968e03d6a6f5636b7bf835a79f8231a9f28522b60a1928482b1e80c8"} Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.775334 4869 scope.go:117] "RemoveContainer" containerID="2af940a67c1ffab4b7ee43796e304345ac658f8f10e063b431cba49601f28a81" Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.795087 4869 scope.go:117] "RemoveContainer" containerID="de2c3ace15c0236f8d549d2020a7257a5e50175412b84b00148fe628883d4888" Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.809871 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c4grt"] Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.819060 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c4grt"] Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.830097 4869 scope.go:117] "RemoveContainer" containerID="03ad18b5b54b99b32d56f27ffbdcf1c333b8d8ef38b83ca44576a4dbb2f5c1c5" Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.847139 4869 scope.go:117] "RemoveContainer" containerID="2af940a67c1ffab4b7ee43796e304345ac658f8f10e063b431cba49601f28a81" Jan 27 10:20:23 crc kubenswrapper[4869]: E0127 10:20:23.847583 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2af940a67c1ffab4b7ee43796e304345ac658f8f10e063b431cba49601f28a81\": container with ID starting with 2af940a67c1ffab4b7ee43796e304345ac658f8f10e063b431cba49601f28a81 not found: ID does not exist" containerID="2af940a67c1ffab4b7ee43796e304345ac658f8f10e063b431cba49601f28a81" Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.847643 
4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2af940a67c1ffab4b7ee43796e304345ac658f8f10e063b431cba49601f28a81"} err="failed to get container status \"2af940a67c1ffab4b7ee43796e304345ac658f8f10e063b431cba49601f28a81\": rpc error: code = NotFound desc = could not find container \"2af940a67c1ffab4b7ee43796e304345ac658f8f10e063b431cba49601f28a81\": container with ID starting with 2af940a67c1ffab4b7ee43796e304345ac658f8f10e063b431cba49601f28a81 not found: ID does not exist" Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.847673 4869 scope.go:117] "RemoveContainer" containerID="de2c3ace15c0236f8d549d2020a7257a5e50175412b84b00148fe628883d4888" Jan 27 10:20:23 crc kubenswrapper[4869]: E0127 10:20:23.848006 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de2c3ace15c0236f8d549d2020a7257a5e50175412b84b00148fe628883d4888\": container with ID starting with de2c3ace15c0236f8d549d2020a7257a5e50175412b84b00148fe628883d4888 not found: ID does not exist" containerID="de2c3ace15c0236f8d549d2020a7257a5e50175412b84b00148fe628883d4888" Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.848045 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de2c3ace15c0236f8d549d2020a7257a5e50175412b84b00148fe628883d4888"} err="failed to get container status \"de2c3ace15c0236f8d549d2020a7257a5e50175412b84b00148fe628883d4888\": rpc error: code = NotFound desc = could not find container \"de2c3ace15c0236f8d549d2020a7257a5e50175412b84b00148fe628883d4888\": container with ID starting with de2c3ace15c0236f8d549d2020a7257a5e50175412b84b00148fe628883d4888 not found: ID does not exist" Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.848073 4869 scope.go:117] "RemoveContainer" containerID="03ad18b5b54b99b32d56f27ffbdcf1c333b8d8ef38b83ca44576a4dbb2f5c1c5" Jan 27 10:20:23 crc kubenswrapper[4869]: E0127 10:20:23.848287 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03ad18b5b54b99b32d56f27ffbdcf1c333b8d8ef38b83ca44576a4dbb2f5c1c5\": container with ID starting with 03ad18b5b54b99b32d56f27ffbdcf1c333b8d8ef38b83ca44576a4dbb2f5c1c5 not found: ID does not exist" containerID="03ad18b5b54b99b32d56f27ffbdcf1c333b8d8ef38b83ca44576a4dbb2f5c1c5" Jan 27 10:20:23 crc kubenswrapper[4869]: I0127 10:20:23.848317 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03ad18b5b54b99b32d56f27ffbdcf1c333b8d8ef38b83ca44576a4dbb2f5c1c5"} err="failed to get container status \"03ad18b5b54b99b32d56f27ffbdcf1c333b8d8ef38b83ca44576a4dbb2f5c1c5\": rpc error: code = NotFound desc = could not find container \"03ad18b5b54b99b32d56f27ffbdcf1c333b8d8ef38b83ca44576a4dbb2f5c1c5\": container with ID starting with 03ad18b5b54b99b32d56f27ffbdcf1c333b8d8ef38b83ca44576a4dbb2f5c1c5 not found: ID does not exist" Jan 27 10:20:24 crc kubenswrapper[4869]: I0127 10:20:24.032628 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:20:24 crc kubenswrapper[4869]: I0127 10:20:24.032705 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:20:24 crc kubenswrapper[4869]: E0127 10:20:24.033007 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:20:24 crc kubenswrapper[4869]: E0127 10:20:24.033200 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:20:24 crc kubenswrapper[4869]: I0127 10:20:24.041096 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e70b21b-1833-4078-b2ce-3c2ace700666" path="/var/lib/kubelet/pods/0e70b21b-1833-4078-b2ce-3c2ace700666/volumes" Jan 27 10:20:27 crc kubenswrapper[4869]: I0127 10:20:27.032946 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:20:27 crc kubenswrapper[4869]: E0127 10:20:27.033185 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:20:36 crc kubenswrapper[4869]: I0127 10:20:36.034126 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:20:36 crc kubenswrapper[4869]: I0127 10:20:36.889041 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerStarted","Data":"91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b"} Jan 27 10:20:36 crc kubenswrapper[4869]: I0127 10:20:36.889517 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 10:20:39 crc kubenswrapper[4869]: I0127 10:20:39.033750 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:20:39 crc kubenswrapper[4869]: E0127 10:20:39.034632 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.033256 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:20:40 crc kubenswrapper[4869]: E0127 10:20:40.033762 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.328930 4869 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-zwb55"] Jan 27 10:20:40 crc kubenswrapper[4869]: E0127 10:20:40.329308 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="693d7bac-2ac2-4031-b8c2-1ed27ded1fb7" containerName="registry-server" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.329321 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="693d7bac-2ac2-4031-b8c2-1ed27ded1fb7" containerName="registry-server" Jan 27 10:20:40 crc kubenswrapper[4869]: E0127 10:20:40.329332 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e70b21b-1833-4078-b2ce-3c2ace700666" containerName="extract-utilities" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.329338 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e70b21b-1833-4078-b2ce-3c2ace700666" containerName="extract-utilities" Jan 27 10:20:40 crc kubenswrapper[4869]: E0127 10:20:40.329349 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="693d7bac-2ac2-4031-b8c2-1ed27ded1fb7" containerName="extract-content" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.329355 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="693d7bac-2ac2-4031-b8c2-1ed27ded1fb7" containerName="extract-content" Jan 27 10:20:40 crc kubenswrapper[4869]: E0127 10:20:40.329370 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="693d7bac-2ac2-4031-b8c2-1ed27ded1fb7" containerName="extract-utilities" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.329375 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="693d7bac-2ac2-4031-b8c2-1ed27ded1fb7" containerName="extract-utilities" Jan 27 10:20:40 crc kubenswrapper[4869]: E0127 10:20:40.329387 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e70b21b-1833-4078-b2ce-3c2ace700666" containerName="extract-content" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.329393 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e70b21b-1833-4078-b2ce-3c2ace700666" containerName="extract-content" Jan 27 10:20:40 crc kubenswrapper[4869]: E0127 10:20:40.329404 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e70b21b-1833-4078-b2ce-3c2ace700666" containerName="registry-server" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.329409 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e70b21b-1833-4078-b2ce-3c2ace700666" containerName="registry-server" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.329577 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e70b21b-1833-4078-b2ce-3c2ace700666" containerName="registry-server" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.329611 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="693d7bac-2ac2-4031-b8c2-1ed27ded1fb7" containerName="registry-server" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.330779 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwb55" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.338207 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwb55"] Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.493980 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39-utilities\") pod \"redhat-marketplace-zwb55\" (UID: \"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39\") " pod="openshift-marketplace/redhat-marketplace-zwb55" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.494025 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39-catalog-content\") pod \"redhat-marketplace-zwb55\" (UID: \"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39\") " pod="openshift-marketplace/redhat-marketplace-zwb55" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.494256 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6hz6\" (UniqueName: \"kubernetes.io/projected/cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39-kube-api-access-x6hz6\") pod \"redhat-marketplace-zwb55\" (UID: \"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39\") " pod="openshift-marketplace/redhat-marketplace-zwb55" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.596229 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6hz6\" (UniqueName: \"kubernetes.io/projected/cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39-kube-api-access-x6hz6\") pod \"redhat-marketplace-zwb55\" (UID: \"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39\") " pod="openshift-marketplace/redhat-marketplace-zwb55" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.596337 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39-catalog-content\") pod \"redhat-marketplace-zwb55\" (UID: \"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39\") " pod="openshift-marketplace/redhat-marketplace-zwb55" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.596355 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39-utilities\") pod \"redhat-marketplace-zwb55\" (UID: \"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39\") " pod="openshift-marketplace/redhat-marketplace-zwb55" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.596918 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39-utilities\") pod \"redhat-marketplace-zwb55\" (UID: \"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39\") " pod="openshift-marketplace/redhat-marketplace-zwb55" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.596917 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39-catalog-content\") pod \"redhat-marketplace-zwb55\" (UID: \"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39\") " pod="openshift-marketplace/redhat-marketplace-zwb55" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.615522 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-x6hz6\" (UniqueName: \"kubernetes.io/projected/cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39-kube-api-access-x6hz6\") pod \"redhat-marketplace-zwb55\" (UID: \"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39\") " pod="openshift-marketplace/redhat-marketplace-zwb55" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.706321 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwb55" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.921339 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" exitCode=0 Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.921414 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerDied","Data":"91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b"} Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.921719 4869 scope.go:117] "RemoveContainer" containerID="a20b4a89ba32824268f73f0987e739cda2d59d9328ca1c869a54f7cd70857969" Jan 27 10:20:40 crc kubenswrapper[4869]: I0127 10:20:40.922258 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:20:40 crc kubenswrapper[4869]: E0127 10:20:40.922642 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:20:41 crc kubenswrapper[4869]: I0127 10:20:41.161631 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwb55"] Jan 27 10:20:41 crc kubenswrapper[4869]: W0127 10:20:41.165997 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb0101d0_e7fc_41c5_a5b4_efb5a7ca1f39.slice/crio-12facab947184bbc4b7febe68dcfbbbe2d12704bf1547dbc33955dfa38f8c103 WatchSource:0}: Error finding container 12facab947184bbc4b7febe68dcfbbbe2d12704bf1547dbc33955dfa38f8c103: Status 404 returned error can't find the container with id 12facab947184bbc4b7febe68dcfbbbe2d12704bf1547dbc33955dfa38f8c103 Jan 27 10:20:41 crc kubenswrapper[4869]: I0127 10:20:41.929480 4869 generic.go:334] "Generic (PLEG): container finished" podID="cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39" containerID="54846ec217f684c8ce97e6254cc70f5c9be758cc2b2879e9ecbeb501aadc0a28" exitCode=0 Jan 27 10:20:41 crc kubenswrapper[4869]: I0127 10:20:41.930022 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwb55" event={"ID":"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39","Type":"ContainerDied","Data":"54846ec217f684c8ce97e6254cc70f5c9be758cc2b2879e9ecbeb501aadc0a28"} Jan 27 10:20:41 crc kubenswrapper[4869]: I0127 10:20:41.930068 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwb55" event={"ID":"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39","Type":"ContainerStarted","Data":"12facab947184bbc4b7febe68dcfbbbe2d12704bf1547dbc33955dfa38f8c103"} Jan 27 10:20:42 crc kubenswrapper[4869]: I0127 10:20:42.943809 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-zwb55" event={"ID":"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39","Type":"ContainerStarted","Data":"e4c18febb94e6fa0fbdf39c9561a7495687c0eb5156bd5aee71cf26a763d9b18"} Jan 27 10:20:43 crc kubenswrapper[4869]: I0127 10:20:43.953564 4869 generic.go:334] "Generic (PLEG): container finished" podID="cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39" containerID="e4c18febb94e6fa0fbdf39c9561a7495687c0eb5156bd5aee71cf26a763d9b18" exitCode=0 Jan 27 10:20:43 crc kubenswrapper[4869]: I0127 10:20:43.953672 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwb55" event={"ID":"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39","Type":"ContainerDied","Data":"e4c18febb94e6fa0fbdf39c9561a7495687c0eb5156bd5aee71cf26a763d9b18"} Jan 27 10:20:44 crc kubenswrapper[4869]: I0127 10:20:44.961888 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwb55" event={"ID":"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39","Type":"ContainerStarted","Data":"f4764f32a7bbb45858ac5957354247639b93d2996890b1ca503ad574e1095298"} Jan 27 10:20:44 crc kubenswrapper[4869]: I0127 10:20:44.983560 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zwb55" podStartSLOduration=2.527346567 podStartE2EDuration="4.983544786s" podCreationTimestamp="2026-01-27 10:20:40 +0000 UTC" firstStartedPulling="2026-01-27 10:20:41.932236756 +0000 UTC m=+1610.552660849" lastFinishedPulling="2026-01-27 10:20:44.388434985 +0000 UTC m=+1613.008859068" observedRunningTime="2026-01-27 10:20:44.978144999 +0000 UTC m=+1613.598569082" watchObservedRunningTime="2026-01-27 10:20:44.983544786 +0000 UTC m=+1613.603968869" Jan 27 10:20:50 crc kubenswrapper[4869]: I0127 10:20:50.707339 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zwb55" Jan 27 10:20:50 crc kubenswrapper[4869]: I0127 10:20:50.707394 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zwb55" Jan 27 10:20:50 crc kubenswrapper[4869]: I0127 10:20:50.753523 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zwb55" Jan 27 10:20:51 crc kubenswrapper[4869]: I0127 10:20:51.033222 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:20:51 crc kubenswrapper[4869]: E0127 10:20:51.033799 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:20:51 crc kubenswrapper[4869]: I0127 10:20:51.039749 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zwb55" Jan 27 10:20:51 crc kubenswrapper[4869]: I0127 10:20:51.085929 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwb55"] Jan 27 10:20:53 crc kubenswrapper[4869]: I0127 10:20:53.014305 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zwb55" 
podUID="cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39" containerName="registry-server" containerID="cri-o://f4764f32a7bbb45858ac5957354247639b93d2996890b1ca503ad574e1095298" gracePeriod=2 Jan 27 10:20:53 crc kubenswrapper[4869]: I0127 10:20:53.033120 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:20:53 crc kubenswrapper[4869]: I0127 10:20:53.424233 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwb55" Jan 27 10:20:53 crc kubenswrapper[4869]: I0127 10:20:53.509596 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39-catalog-content\") pod \"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39\" (UID: \"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39\") " Jan 27 10:20:53 crc kubenswrapper[4869]: I0127 10:20:53.509643 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39-utilities\") pod \"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39\" (UID: \"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39\") " Jan 27 10:20:53 crc kubenswrapper[4869]: I0127 10:20:53.509760 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6hz6\" (UniqueName: \"kubernetes.io/projected/cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39-kube-api-access-x6hz6\") pod \"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39\" (UID: \"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39\") " Jan 27 10:20:53 crc kubenswrapper[4869]: I0127 10:20:53.510908 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39-utilities" (OuterVolumeSpecName: "utilities") pod "cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39" (UID: "cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:20:53 crc kubenswrapper[4869]: I0127 10:20:53.521029 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39-kube-api-access-x6hz6" (OuterVolumeSpecName: "kube-api-access-x6hz6") pod "cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39" (UID: "cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39"). InnerVolumeSpecName "kube-api-access-x6hz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:20:53 crc kubenswrapper[4869]: I0127 10:20:53.538192 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39" (UID: "cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:20:53 crc kubenswrapper[4869]: I0127 10:20:53.611357 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:20:53 crc kubenswrapper[4869]: I0127 10:20:53.611395 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:20:53 crc kubenswrapper[4869]: I0127 10:20:53.611406 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6hz6\" (UniqueName: \"kubernetes.io/projected/cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39-kube-api-access-x6hz6\") on node \"crc\" DevicePath \"\"" Jan 27 10:20:53 crc kubenswrapper[4869]: I0127 10:20:53.904492 4869 scope.go:117] "RemoveContainer" containerID="6bf82af922b85d626cea63b3634be750c808560cba053f73ffeec66c8e6f02dd" Jan 27 10:20:54 crc kubenswrapper[4869]: I0127 10:20:54.025957 4869 generic.go:334] "Generic (PLEG): container finished" podID="cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39" containerID="f4764f32a7bbb45858ac5957354247639b93d2996890b1ca503ad574e1095298" exitCode=0 Jan 27 10:20:54 crc kubenswrapper[4869]: I0127 10:20:54.026019 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwb55" Jan 27 10:20:54 crc kubenswrapper[4869]: I0127 10:20:54.026051 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwb55" event={"ID":"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39","Type":"ContainerDied","Data":"f4764f32a7bbb45858ac5957354247639b93d2996890b1ca503ad574e1095298"} Jan 27 10:20:54 crc kubenswrapper[4869]: I0127 10:20:54.026115 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwb55" event={"ID":"cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39","Type":"ContainerDied","Data":"12facab947184bbc4b7febe68dcfbbbe2d12704bf1547dbc33955dfa38f8c103"} Jan 27 10:20:54 crc kubenswrapper[4869]: I0127 10:20:54.026140 4869 scope.go:117] "RemoveContainer" containerID="f4764f32a7bbb45858ac5957354247639b93d2996890b1ca503ad574e1095298" Jan 27 10:20:54 crc kubenswrapper[4869]: I0127 10:20:54.028810 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerStarted","Data":"2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b"} Jan 27 10:20:54 crc kubenswrapper[4869]: I0127 10:20:54.029040 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 10:20:54 crc kubenswrapper[4869]: I0127 10:20:54.032754 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:20:54 crc kubenswrapper[4869]: E0127 10:20:54.033033 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:20:54 crc kubenswrapper[4869]: I0127 10:20:54.043601 4869 scope.go:117] "RemoveContainer" 
containerID="e4c18febb94e6fa0fbdf39c9561a7495687c0eb5156bd5aee71cf26a763d9b18" Jan 27 10:20:54 crc kubenswrapper[4869]: I0127 10:20:54.079271 4869 scope.go:117] "RemoveContainer" containerID="54846ec217f684c8ce97e6254cc70f5c9be758cc2b2879e9ecbeb501aadc0a28" Jan 27 10:20:54 crc kubenswrapper[4869]: I0127 10:20:54.093255 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwb55"] Jan 27 10:20:54 crc kubenswrapper[4869]: I0127 10:20:54.099156 4869 scope.go:117] "RemoveContainer" containerID="f4764f32a7bbb45858ac5957354247639b93d2996890b1ca503ad574e1095298" Jan 27 10:20:54 crc kubenswrapper[4869]: E0127 10:20:54.099655 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4764f32a7bbb45858ac5957354247639b93d2996890b1ca503ad574e1095298\": container with ID starting with f4764f32a7bbb45858ac5957354247639b93d2996890b1ca503ad574e1095298 not found: ID does not exist" containerID="f4764f32a7bbb45858ac5957354247639b93d2996890b1ca503ad574e1095298" Jan 27 10:20:54 crc kubenswrapper[4869]: I0127 10:20:54.099691 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4764f32a7bbb45858ac5957354247639b93d2996890b1ca503ad574e1095298"} err="failed to get container status \"f4764f32a7bbb45858ac5957354247639b93d2996890b1ca503ad574e1095298\": rpc error: code = NotFound desc = could not find container \"f4764f32a7bbb45858ac5957354247639b93d2996890b1ca503ad574e1095298\": container with ID starting with f4764f32a7bbb45858ac5957354247639b93d2996890b1ca503ad574e1095298 not found: ID does not exist" Jan 27 10:20:54 crc kubenswrapper[4869]: I0127 10:20:54.099715 4869 scope.go:117] "RemoveContainer" containerID="e4c18febb94e6fa0fbdf39c9561a7495687c0eb5156bd5aee71cf26a763d9b18" Jan 27 10:20:54 crc kubenswrapper[4869]: E0127 10:20:54.100045 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4c18febb94e6fa0fbdf39c9561a7495687c0eb5156bd5aee71cf26a763d9b18\": container with ID starting with e4c18febb94e6fa0fbdf39c9561a7495687c0eb5156bd5aee71cf26a763d9b18 not found: ID does not exist" containerID="e4c18febb94e6fa0fbdf39c9561a7495687c0eb5156bd5aee71cf26a763d9b18" Jan 27 10:20:54 crc kubenswrapper[4869]: I0127 10:20:54.100080 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4c18febb94e6fa0fbdf39c9561a7495687c0eb5156bd5aee71cf26a763d9b18"} err="failed to get container status \"e4c18febb94e6fa0fbdf39c9561a7495687c0eb5156bd5aee71cf26a763d9b18\": rpc error: code = NotFound desc = could not find container \"e4c18febb94e6fa0fbdf39c9561a7495687c0eb5156bd5aee71cf26a763d9b18\": container with ID starting with e4c18febb94e6fa0fbdf39c9561a7495687c0eb5156bd5aee71cf26a763d9b18 not found: ID does not exist" Jan 27 10:20:54 crc kubenswrapper[4869]: I0127 10:20:54.100107 4869 scope.go:117] "RemoveContainer" containerID="54846ec217f684c8ce97e6254cc70f5c9be758cc2b2879e9ecbeb501aadc0a28" Jan 27 10:20:54 crc kubenswrapper[4869]: E0127 10:20:54.100440 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54846ec217f684c8ce97e6254cc70f5c9be758cc2b2879e9ecbeb501aadc0a28\": container with ID starting with 54846ec217f684c8ce97e6254cc70f5c9be758cc2b2879e9ecbeb501aadc0a28 not found: ID does not exist" containerID="54846ec217f684c8ce97e6254cc70f5c9be758cc2b2879e9ecbeb501aadc0a28" Jan 27 
10:20:54 crc kubenswrapper[4869]: I0127 10:20:54.100463 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54846ec217f684c8ce97e6254cc70f5c9be758cc2b2879e9ecbeb501aadc0a28"} err="failed to get container status \"54846ec217f684c8ce97e6254cc70f5c9be758cc2b2879e9ecbeb501aadc0a28\": rpc error: code = NotFound desc = could not find container \"54846ec217f684c8ce97e6254cc70f5c9be758cc2b2879e9ecbeb501aadc0a28\": container with ID starting with 54846ec217f684c8ce97e6254cc70f5c9be758cc2b2879e9ecbeb501aadc0a28 not found: ID does not exist" Jan 27 10:20:54 crc kubenswrapper[4869]: I0127 10:20:54.101157 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwb55"] Jan 27 10:20:56 crc kubenswrapper[4869]: I0127 10:20:56.044584 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39" path="/var/lib/kubelet/pods/cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39/volumes" Jan 27 10:20:58 crc kubenswrapper[4869]: I0127 10:20:58.061617 4869 generic.go:334] "Generic (PLEG): container finished" podID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" exitCode=0 Jan 27 10:20:58 crc kubenswrapper[4869]: I0127 10:20:58.061698 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerDied","Data":"2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b"} Jan 27 10:20:58 crc kubenswrapper[4869]: I0127 10:20:58.061998 4869 scope.go:117] "RemoveContainer" containerID="46b282051c77f59c73cb1869f644c22955b3556e14721ac08ce101a141fa3df4" Jan 27 10:20:58 crc kubenswrapper[4869]: I0127 10:20:58.064409 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:20:58 crc kubenswrapper[4869]: E0127 10:20:58.066313 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:21:04 crc kubenswrapper[4869]: I0127 10:21:04.033695 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:21:04 crc kubenswrapper[4869]: E0127 10:21:04.036429 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:21:07 crc kubenswrapper[4869]: I0127 10:21:07.033882 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:21:07 crc kubenswrapper[4869]: E0127 10:21:07.034544 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" 
podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:21:11 crc kubenswrapper[4869]: I0127 10:21:11.033443 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:21:11 crc kubenswrapper[4869]: E0127 10:21:11.034298 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:21:16 crc kubenswrapper[4869]: I0127 10:21:16.033326 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:21:16 crc kubenswrapper[4869]: E0127 10:21:16.034014 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:21:20 crc kubenswrapper[4869]: I0127 10:21:20.033406 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:21:20 crc kubenswrapper[4869]: E0127 10:21:20.034152 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:21:23 crc kubenswrapper[4869]: I0127 10:21:23.033075 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:21:23 crc kubenswrapper[4869]: E0127 10:21:23.033269 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:21:31 crc kubenswrapper[4869]: I0127 10:21:31.033417 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:21:31 crc kubenswrapper[4869]: E0127 10:21:31.034601 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:21:34 crc kubenswrapper[4869]: I0127 10:21:34.033226 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:21:34 crc kubenswrapper[4869]: E0127 10:21:34.033711 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:21:38 crc kubenswrapper[4869]: I0127 10:21:38.033809 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:21:38 crc kubenswrapper[4869]: E0127 10:21:38.034239 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:21:45 crc kubenswrapper[4869]: I0127 10:21:45.032917 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:21:45 crc kubenswrapper[4869]: E0127 10:21:45.033615 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:21:48 crc kubenswrapper[4869]: I0127 10:21:48.033418 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:21:48 crc kubenswrapper[4869]: E0127 10:21:48.033706 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:21:51 crc kubenswrapper[4869]: I0127 10:21:51.033679 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:21:51 crc kubenswrapper[4869]: E0127 10:21:51.034507 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:21:59 crc kubenswrapper[4869]: I0127 10:21:59.033063 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:21:59 crc kubenswrapper[4869]: E0127 10:21:59.033782 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:22:02 crc kubenswrapper[4869]: I0127 10:22:02.038521 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:22:02 crc kubenswrapper[4869]: E0127 10:22:02.039017 4869 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:22:06 crc kubenswrapper[4869]: I0127 10:22:06.037053 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:22:06 crc kubenswrapper[4869]: E0127 10:22:06.037529 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:22:14 crc kubenswrapper[4869]: I0127 10:22:14.033671 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:22:14 crc kubenswrapper[4869]: I0127 10:22:14.034045 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:22:14 crc kubenswrapper[4869]: E0127 10:22:14.034166 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:22:14 crc kubenswrapper[4869]: E0127 10:22:14.034252 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:22:19 crc kubenswrapper[4869]: I0127 10:22:19.032565 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:22:19 crc kubenswrapper[4869]: E0127 10:22:19.033076 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:22:26 crc kubenswrapper[4869]: I0127 10:22:26.034154 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:22:26 crc kubenswrapper[4869]: I0127 10:22:26.035871 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:22:26 crc kubenswrapper[4869]: E0127 10:22:26.036058 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" 
podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:22:26 crc kubenswrapper[4869]: E0127 10:22:26.036161 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:22:30 crc kubenswrapper[4869]: I0127 10:22:30.033617 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:22:30 crc kubenswrapper[4869]: E0127 10:22:30.034247 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:22:37 crc kubenswrapper[4869]: I0127 10:22:37.033935 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:22:37 crc kubenswrapper[4869]: E0127 10:22:37.034953 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:22:39 crc kubenswrapper[4869]: I0127 10:22:39.034005 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:22:39 crc kubenswrapper[4869]: E0127 10:22:39.034769 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:22:41 crc kubenswrapper[4869]: I0127 10:22:41.033703 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:22:41 crc kubenswrapper[4869]: E0127 10:22:41.034522 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:22:49 crc kubenswrapper[4869]: I0127 10:22:49.034451 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:22:49 crc kubenswrapper[4869]: E0127 10:22:49.035248 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 
Jan 27 10:22:54 crc kubenswrapper[4869]: I0127 10:22:54.033948 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:22:54 crc kubenswrapper[4869]: E0127 10:22:54.035348 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:22:55 crc kubenswrapper[4869]: I0127 10:22:55.033845 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:22:55 crc kubenswrapper[4869]: E0127 10:22:55.034123 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:23:01 crc kubenswrapper[4869]: I0127 10:23:01.032591 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:23:01 crc kubenswrapper[4869]: E0127 10:23:01.034363 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:23:05 crc kubenswrapper[4869]: I0127 10:23:05.033363 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:23:05 crc kubenswrapper[4869]: E0127 10:23:05.034016 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:23:10 crc kubenswrapper[4869]: I0127 10:23:10.032559 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:23:10 crc kubenswrapper[4869]: E0127 10:23:10.033035 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:23:15 crc kubenswrapper[4869]: I0127 10:23:15.033467 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:23:15 crc kubenswrapper[4869]: E0127 10:23:15.034099 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon
pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:23:17 crc kubenswrapper[4869]: I0127 10:23:17.032613 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:23:17 crc kubenswrapper[4869]: E0127 10:23:17.033080 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:23:21 crc kubenswrapper[4869]: I0127 10:23:21.033139 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:23:21 crc kubenswrapper[4869]: E0127 10:23:21.033723 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:23:28 crc kubenswrapper[4869]: I0127 10:23:28.033392 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:23:28 crc kubenswrapper[4869]: E0127 10:23:28.034173 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:23:31 crc kubenswrapper[4869]: I0127 10:23:31.035005 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:23:31 crc kubenswrapper[4869]: E0127 10:23:31.036630 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:23:35 crc kubenswrapper[4869]: I0127 10:23:35.033555 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:23:35 crc kubenswrapper[4869]: E0127 10:23:35.034354 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:23:40 crc kubenswrapper[4869]: I0127 10:23:40.033363 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:23:40 crc kubenswrapper[4869]: E0127 10:23:40.033935 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:23:46 crc kubenswrapper[4869]: I0127 10:23:46.037145 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:23:46 crc kubenswrapper[4869]: E0127 10:23:46.037732 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:23:49 crc kubenswrapper[4869]: I0127 10:23:49.033744 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:23:49 crc kubenswrapper[4869]: E0127 10:23:49.034094 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:23:53 crc kubenswrapper[4869]: I0127 10:23:53.033666 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:23:53 crc kubenswrapper[4869]: E0127 10:23:53.034296 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:23:58 crc kubenswrapper[4869]: I0127 10:23:58.038690 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:23:58 crc kubenswrapper[4869]: E0127 10:23:58.039408 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:24:02 crc kubenswrapper[4869]: I0127 10:24:02.038879 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:24:02 crc kubenswrapper[4869]: E0127 10:24:02.039392 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:24:06 crc kubenswrapper[4869]: I0127 10:24:06.032669 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 
Jan 27 10:24:06 crc kubenswrapper[4869]: E0127 10:24:06.033555 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:24:10 crc kubenswrapper[4869]: I0127 10:24:10.032958 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:24:10 crc kubenswrapper[4869]: E0127 10:24:10.033286 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:24:16 crc kubenswrapper[4869]: I0127 10:24:16.033524 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:24:16 crc kubenswrapper[4869]: E0127 10:24:16.034295 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:24:19 crc kubenswrapper[4869]: I0127 10:24:19.033261 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:24:19 crc kubenswrapper[4869]: E0127 10:24:19.033501 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:24:24 crc kubenswrapper[4869]: I0127 10:24:24.033901 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:24:24 crc kubenswrapper[4869]: E0127 10:24:24.034622 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:24:30 crc kubenswrapper[4869]: I0127 10:24:30.033217 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:24:30 crc kubenswrapper[4869]: E0127 10:24:30.033634 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:24:30 crc kubenswrapper[4869]: I0127
10:24:30.034744 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:24:30 crc kubenswrapper[4869]: E0127 10:24:30.035280 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:24:35 crc kubenswrapper[4869]: I0127 10:24:35.034512 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:24:35 crc kubenswrapper[4869]: E0127 10:24:35.035148 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:24:41 crc kubenswrapper[4869]: I0127 10:24:41.032729 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:24:41 crc kubenswrapper[4869]: E0127 10:24:41.034010 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:24:44 crc kubenswrapper[4869]: I0127 10:24:44.032980 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:24:44 crc kubenswrapper[4869]: E0127 10:24:44.033571 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:24:46 crc kubenswrapper[4869]: I0127 10:24:46.032966 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:24:46 crc kubenswrapper[4869]: E0127 10:24:46.033404 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:24:56 crc kubenswrapper[4869]: I0127 10:24:56.032967 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:24:56 crc kubenswrapper[4869]: E0127 10:24:56.033848 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" 
pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:24:57 crc kubenswrapper[4869]: I0127 10:24:57.032942 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:24:58 crc kubenswrapper[4869]: I0127 10:24:58.033224 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:24:58 crc kubenswrapper[4869]: E0127 10:24:58.033935 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:24:58 crc kubenswrapper[4869]: I0127 10:24:58.215794 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerStarted","Data":"4bcba44fbcb50f71e5672c0ef23b355f7a8a7dd5428b67fc78b24dacaac2e337"} Jan 27 10:25:09 crc kubenswrapper[4869]: I0127 10:25:09.033473 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:25:09 crc kubenswrapper[4869]: E0127 10:25:09.034220 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:25:10 crc kubenswrapper[4869]: I0127 10:25:10.033727 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:25:10 crc kubenswrapper[4869]: E0127 10:25:10.034062 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:25:20 crc kubenswrapper[4869]: I0127 10:25:20.032851 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:25:20 crc kubenswrapper[4869]: E0127 10:25:20.033473 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:25:24 crc kubenswrapper[4869]: I0127 10:25:24.588114 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8w5b8"] Jan 27 10:25:24 crc kubenswrapper[4869]: E0127 10:25:24.589011 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39" containerName="extract-utilities" Jan 27 10:25:24 crc kubenswrapper[4869]: I0127 10:25:24.589028 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39" containerName="extract-utilities" Jan 27 10:25:24 crc kubenswrapper[4869]: E0127 
10:25:24.589048 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39" containerName="registry-server" Jan 27 10:25:24 crc kubenswrapper[4869]: I0127 10:25:24.589055 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39" containerName="registry-server" Jan 27 10:25:24 crc kubenswrapper[4869]: E0127 10:25:24.589095 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39" containerName="extract-content" Jan 27 10:25:24 crc kubenswrapper[4869]: I0127 10:25:24.589103 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39" containerName="extract-content" Jan 27 10:25:24 crc kubenswrapper[4869]: I0127 10:25:24.589280 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb0101d0-e7fc-41c5-a5b4-efb5a7ca1f39" containerName="registry-server" Jan 27 10:25:24 crc kubenswrapper[4869]: I0127 10:25:24.590907 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8w5b8" Jan 27 10:25:24 crc kubenswrapper[4869]: I0127 10:25:24.599782 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8w5b8"] Jan 27 10:25:24 crc kubenswrapper[4869]: I0127 10:25:24.686379 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d9587c-7bef-427e-b2e3-51e49b3ee67e-utilities\") pod \"certified-operators-8w5b8\" (UID: \"e8d9587c-7bef-427e-b2e3-51e49b3ee67e\") " pod="openshift-marketplace/certified-operators-8w5b8" Jan 27 10:25:24 crc kubenswrapper[4869]: I0127 10:25:24.686644 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d9587c-7bef-427e-b2e3-51e49b3ee67e-catalog-content\") pod \"certified-operators-8w5b8\" (UID: \"e8d9587c-7bef-427e-b2e3-51e49b3ee67e\") " pod="openshift-marketplace/certified-operators-8w5b8" Jan 27 10:25:24 crc kubenswrapper[4869]: I0127 10:25:24.686712 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fqqw\" (UniqueName: \"kubernetes.io/projected/e8d9587c-7bef-427e-b2e3-51e49b3ee67e-kube-api-access-9fqqw\") pod \"certified-operators-8w5b8\" (UID: \"e8d9587c-7bef-427e-b2e3-51e49b3ee67e\") " pod="openshift-marketplace/certified-operators-8w5b8" Jan 27 10:25:24 crc kubenswrapper[4869]: I0127 10:25:24.787979 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d9587c-7bef-427e-b2e3-51e49b3ee67e-utilities\") pod \"certified-operators-8w5b8\" (UID: \"e8d9587c-7bef-427e-b2e3-51e49b3ee67e\") " pod="openshift-marketplace/certified-operators-8w5b8" Jan 27 10:25:24 crc kubenswrapper[4869]: I0127 10:25:24.788111 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d9587c-7bef-427e-b2e3-51e49b3ee67e-catalog-content\") pod \"certified-operators-8w5b8\" (UID: \"e8d9587c-7bef-427e-b2e3-51e49b3ee67e\") " pod="openshift-marketplace/certified-operators-8w5b8" Jan 27 10:25:24 crc kubenswrapper[4869]: I0127 10:25:24.788144 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fqqw\" (UniqueName: 
\"kubernetes.io/projected/e8d9587c-7bef-427e-b2e3-51e49b3ee67e-kube-api-access-9fqqw\") pod \"certified-operators-8w5b8\" (UID: \"e8d9587c-7bef-427e-b2e3-51e49b3ee67e\") " pod="openshift-marketplace/certified-operators-8w5b8" Jan 27 10:25:24 crc kubenswrapper[4869]: I0127 10:25:24.788734 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d9587c-7bef-427e-b2e3-51e49b3ee67e-utilities\") pod \"certified-operators-8w5b8\" (UID: \"e8d9587c-7bef-427e-b2e3-51e49b3ee67e\") " pod="openshift-marketplace/certified-operators-8w5b8" Jan 27 10:25:24 crc kubenswrapper[4869]: I0127 10:25:24.788901 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d9587c-7bef-427e-b2e3-51e49b3ee67e-catalog-content\") pod \"certified-operators-8w5b8\" (UID: \"e8d9587c-7bef-427e-b2e3-51e49b3ee67e\") " pod="openshift-marketplace/certified-operators-8w5b8" Jan 27 10:25:24 crc kubenswrapper[4869]: I0127 10:25:24.807579 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fqqw\" (UniqueName: \"kubernetes.io/projected/e8d9587c-7bef-427e-b2e3-51e49b3ee67e-kube-api-access-9fqqw\") pod \"certified-operators-8w5b8\" (UID: \"e8d9587c-7bef-427e-b2e3-51e49b3ee67e\") " pod="openshift-marketplace/certified-operators-8w5b8" Jan 27 10:25:24 crc kubenswrapper[4869]: I0127 10:25:24.914102 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8w5b8" Jan 27 10:25:25 crc kubenswrapper[4869]: I0127 10:25:25.032739 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:25:25 crc kubenswrapper[4869]: E0127 10:25:25.033038 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:25:25 crc kubenswrapper[4869]: I0127 10:25:25.367963 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8w5b8"] Jan 27 10:25:25 crc kubenswrapper[4869]: I0127 10:25:25.406127 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w5b8" event={"ID":"e8d9587c-7bef-427e-b2e3-51e49b3ee67e","Type":"ContainerStarted","Data":"c8b72db276294b8153d3b7143fe92bb8ebe3c7a236b21c9f2dac2b8d90a7052b"} Jan 27 10:25:26 crc kubenswrapper[4869]: I0127 10:25:26.415976 4869 generic.go:334] "Generic (PLEG): container finished" podID="e8d9587c-7bef-427e-b2e3-51e49b3ee67e" containerID="1f540ec6289e649c3253c889bba7c9507e818221620103cbc76f2b3fd05431c0" exitCode=0 Jan 27 10:25:26 crc kubenswrapper[4869]: I0127 10:25:26.416050 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w5b8" event={"ID":"e8d9587c-7bef-427e-b2e3-51e49b3ee67e","Type":"ContainerDied","Data":"1f540ec6289e649c3253c889bba7c9507e818221620103cbc76f2b3fd05431c0"} Jan 27 10:25:26 crc kubenswrapper[4869]: I0127 10:25:26.420103 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 10:25:27 crc kubenswrapper[4869]: I0127 10:25:27.424001 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="e8d9587c-7bef-427e-b2e3-51e49b3ee67e" containerID="a5de6e486d057fa844b6f449000178d34f4035b69a2469989a2a4a8bdc90b479" exitCode=0 Jan 27 10:25:27 crc kubenswrapper[4869]: I0127 10:25:27.424046 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w5b8" event={"ID":"e8d9587c-7bef-427e-b2e3-51e49b3ee67e","Type":"ContainerDied","Data":"a5de6e486d057fa844b6f449000178d34f4035b69a2469989a2a4a8bdc90b479"} Jan 27 10:25:28 crc kubenswrapper[4869]: I0127 10:25:28.432928 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w5b8" event={"ID":"e8d9587c-7bef-427e-b2e3-51e49b3ee67e","Type":"ContainerStarted","Data":"bd10c12e1f05346e21487496e4799573addecf334c25a0e8c007ae185e97af3f"} Jan 27 10:25:28 crc kubenswrapper[4869]: I0127 10:25:28.456164 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8w5b8" podStartSLOduration=3.044206102 podStartE2EDuration="4.456144724s" podCreationTimestamp="2026-01-27 10:25:24 +0000 UTC" firstStartedPulling="2026-01-27 10:25:26.418671813 +0000 UTC m=+1895.039095936" lastFinishedPulling="2026-01-27 10:25:27.830610475 +0000 UTC m=+1896.451034558" observedRunningTime="2026-01-27 10:25:28.448668262 +0000 UTC m=+1897.069092345" watchObservedRunningTime="2026-01-27 10:25:28.456144724 +0000 UTC m=+1897.076568817" Jan 27 10:25:31 crc kubenswrapper[4869]: I0127 10:25:31.033865 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:25:31 crc kubenswrapper[4869]: E0127 10:25:31.034571 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:25:34 crc kubenswrapper[4869]: I0127 10:25:34.914816 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8w5b8" Jan 27 10:25:34 crc kubenswrapper[4869]: I0127 10:25:34.915307 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8w5b8" Jan 27 10:25:34 crc kubenswrapper[4869]: I0127 10:25:34.960281 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8w5b8" Jan 27 10:25:35 crc kubenswrapper[4869]: I0127 10:25:35.531268 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8w5b8" Jan 27 10:25:35 crc kubenswrapper[4869]: I0127 10:25:35.573703 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8w5b8"] Jan 27 10:25:37 crc kubenswrapper[4869]: I0127 10:25:37.508536 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8w5b8" podUID="e8d9587c-7bef-427e-b2e3-51e49b3ee67e" containerName="registry-server" containerID="cri-o://bd10c12e1f05346e21487496e4799573addecf334c25a0e8c007ae185e97af3f" gracePeriod=2 Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:37.992306 4869 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.103903 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d9587c-7bef-427e-b2e3-51e49b3ee67e-catalog-content\") pod \"e8d9587c-7bef-427e-b2e3-51e49b3ee67e\" (UID: \"e8d9587c-7bef-427e-b2e3-51e49b3ee67e\") " Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.104151 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fqqw\" (UniqueName: \"kubernetes.io/projected/e8d9587c-7bef-427e-b2e3-51e49b3ee67e-kube-api-access-9fqqw\") pod \"e8d9587c-7bef-427e-b2e3-51e49b3ee67e\" (UID: \"e8d9587c-7bef-427e-b2e3-51e49b3ee67e\") " Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.104229 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d9587c-7bef-427e-b2e3-51e49b3ee67e-utilities\") pod \"e8d9587c-7bef-427e-b2e3-51e49b3ee67e\" (UID: \"e8d9587c-7bef-427e-b2e3-51e49b3ee67e\") " Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.105116 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8d9587c-7bef-427e-b2e3-51e49b3ee67e-utilities" (OuterVolumeSpecName: "utilities") pod "e8d9587c-7bef-427e-b2e3-51e49b3ee67e" (UID: "e8d9587c-7bef-427e-b2e3-51e49b3ee67e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.121347 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d9587c-7bef-427e-b2e3-51e49b3ee67e-kube-api-access-9fqqw" (OuterVolumeSpecName: "kube-api-access-9fqqw") pod "e8d9587c-7bef-427e-b2e3-51e49b3ee67e" (UID: "e8d9587c-7bef-427e-b2e3-51e49b3ee67e"). InnerVolumeSpecName "kube-api-access-9fqqw".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.206289 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fqqw\" (UniqueName: \"kubernetes.io/projected/e8d9587c-7bef-427e-b2e3-51e49b3ee67e-kube-api-access-9fqqw\") on node \"crc\" DevicePath \"\"" Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.206310 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d9587c-7bef-427e-b2e3-51e49b3ee67e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.516389 4869 generic.go:334] "Generic (PLEG): container finished" podID="e8d9587c-7bef-427e-b2e3-51e49b3ee67e" containerID="bd10c12e1f05346e21487496e4799573addecf334c25a0e8c007ae185e97af3f" exitCode=0 Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.516437 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w5b8" event={"ID":"e8d9587c-7bef-427e-b2e3-51e49b3ee67e","Type":"ContainerDied","Data":"bd10c12e1f05346e21487496e4799573addecf334c25a0e8c007ae185e97af3f"} Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.516470 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8w5b8" event={"ID":"e8d9587c-7bef-427e-b2e3-51e49b3ee67e","Type":"ContainerDied","Data":"c8b72db276294b8153d3b7143fe92bb8ebe3c7a236b21c9f2dac2b8d90a7052b"} Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.516490 4869 scope.go:117] "RemoveContainer" containerID="bd10c12e1f05346e21487496e4799573addecf334c25a0e8c007ae185e97af3f" Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.516518 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8w5b8" Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.545321 4869 scope.go:117] "RemoveContainer" containerID="a5de6e486d057fa844b6f449000178d34f4035b69a2469989a2a4a8bdc90b479" Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.573909 4869 scope.go:117] "RemoveContainer" containerID="1f540ec6289e649c3253c889bba7c9507e818221620103cbc76f2b3fd05431c0" Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.608401 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8d9587c-7bef-427e-b2e3-51e49b3ee67e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8d9587c-7bef-427e-b2e3-51e49b3ee67e" (UID: "e8d9587c-7bef-427e-b2e3-51e49b3ee67e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.612642 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d9587c-7bef-427e-b2e3-51e49b3ee67e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.624456 4869 scope.go:117] "RemoveContainer" containerID="bd10c12e1f05346e21487496e4799573addecf334c25a0e8c007ae185e97af3f" Jan 27 10:25:38 crc kubenswrapper[4869]: E0127 10:25:38.625000 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd10c12e1f05346e21487496e4799573addecf334c25a0e8c007ae185e97af3f\": container with ID starting with bd10c12e1f05346e21487496e4799573addecf334c25a0e8c007ae185e97af3f not found: ID does not exist" containerID="bd10c12e1f05346e21487496e4799573addecf334c25a0e8c007ae185e97af3f" Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.625029 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd10c12e1f05346e21487496e4799573addecf334c25a0e8c007ae185e97af3f"} err="failed to get container status \"bd10c12e1f05346e21487496e4799573addecf334c25a0e8c007ae185e97af3f\": rpc error: code = NotFound desc = could not find container \"bd10c12e1f05346e21487496e4799573addecf334c25a0e8c007ae185e97af3f\": container with ID starting with bd10c12e1f05346e21487496e4799573addecf334c25a0e8c007ae185e97af3f not found: ID does not exist" Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.625053 4869 scope.go:117] "RemoveContainer" containerID="a5de6e486d057fa844b6f449000178d34f4035b69a2469989a2a4a8bdc90b479" Jan 27 10:25:38 crc kubenswrapper[4869]: E0127 10:25:38.625279 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5de6e486d057fa844b6f449000178d34f4035b69a2469989a2a4a8bdc90b479\": container with ID starting with a5de6e486d057fa844b6f449000178d34f4035b69a2469989a2a4a8bdc90b479 not found: ID does not exist" containerID="a5de6e486d057fa844b6f449000178d34f4035b69a2469989a2a4a8bdc90b479" Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.625300 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5de6e486d057fa844b6f449000178d34f4035b69a2469989a2a4a8bdc90b479"} err="failed to get container status \"a5de6e486d057fa844b6f449000178d34f4035b69a2469989a2a4a8bdc90b479\": rpc error: code = NotFound desc = could not find container \"a5de6e486d057fa844b6f449000178d34f4035b69a2469989a2a4a8bdc90b479\": container with ID starting with a5de6e486d057fa844b6f449000178d34f4035b69a2469989a2a4a8bdc90b479 not found: ID does not exist" Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.625314 4869 scope.go:117] "RemoveContainer" containerID="1f540ec6289e649c3253c889bba7c9507e818221620103cbc76f2b3fd05431c0" Jan 27 10:25:38 crc kubenswrapper[4869]: E0127 10:25:38.625686 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f540ec6289e649c3253c889bba7c9507e818221620103cbc76f2b3fd05431c0\": container with ID starting with 1f540ec6289e649c3253c889bba7c9507e818221620103cbc76f2b3fd05431c0 not found: ID does not exist" containerID="1f540ec6289e649c3253c889bba7c9507e818221620103cbc76f2b3fd05431c0" Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.625704 4869 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f540ec6289e649c3253c889bba7c9507e818221620103cbc76f2b3fd05431c0"} err="failed to get container status \"1f540ec6289e649c3253c889bba7c9507e818221620103cbc76f2b3fd05431c0\": rpc error: code = NotFound desc = could not find container \"1f540ec6289e649c3253c889bba7c9507e818221620103cbc76f2b3fd05431c0\": container with ID starting with 1f540ec6289e649c3253c889bba7c9507e818221620103cbc76f2b3fd05431c0 not found: ID does not exist" Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.849672 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8w5b8"] Jan 27 10:25:38 crc kubenswrapper[4869]: I0127 10:25:38.857283 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8w5b8"] Jan 27 10:25:39 crc kubenswrapper[4869]: I0127 10:25:39.033999 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:25:39 crc kubenswrapper[4869]: E0127 10:25:39.034266 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:25:40 crc kubenswrapper[4869]: I0127 10:25:40.043356 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8d9587c-7bef-427e-b2e3-51e49b3ee67e" path="/var/lib/kubelet/pods/e8d9587c-7bef-427e-b2e3-51e49b3ee67e/volumes" Jan 27 10:25:45 crc kubenswrapper[4869]: I0127 10:25:45.032796 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:25:45 crc kubenswrapper[4869]: E0127 10:25:45.033498 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:25:53 crc kubenswrapper[4869]: I0127 10:25:53.032642 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:25:53 crc kubenswrapper[4869]: I0127 10:25:53.649886 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerStarted","Data":"74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089"} Jan 27 10:25:53 crc kubenswrapper[4869]: I0127 10:25:53.650357 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 10:25:56 crc kubenswrapper[4869]: I0127 10:25:56.033779 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:25:56 crc kubenswrapper[4869]: E0127 10:25:56.047413 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:25:57 crc kubenswrapper[4869]: I0127 
10:25:57.682064 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" exitCode=0 Jan 27 10:25:57 crc kubenswrapper[4869]: I0127 10:25:57.682267 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerDied","Data":"74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089"} Jan 27 10:25:57 crc kubenswrapper[4869]: I0127 10:25:57.682403 4869 scope.go:117] "RemoveContainer" containerID="91b1f12a48b9524b0e3f1f47da91ccfdcd47a99b73ceff328c68b2c57e54655b" Jan 27 10:25:57 crc kubenswrapper[4869]: I0127 10:25:57.683161 4869 scope.go:117] "RemoveContainer" containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:25:57 crc kubenswrapper[4869]: E0127 10:25:57.683424 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:26:07 crc kubenswrapper[4869]: I0127 10:26:07.032822 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:26:07 crc kubenswrapper[4869]: I0127 10:26:07.786732 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerStarted","Data":"9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5"} Jan 27 10:26:07 crc kubenswrapper[4869]: I0127 10:26:07.787528 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 10:26:11 crc kubenswrapper[4869]: I0127 10:26:11.824227 4869 generic.go:334] "Generic (PLEG): container finished" podID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" exitCode=0 Jan 27 10:26:11 crc kubenswrapper[4869]: I0127 10:26:11.824295 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerDied","Data":"9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5"} Jan 27 10:26:11 crc kubenswrapper[4869]: I0127 10:26:11.824334 4869 scope.go:117] "RemoveContainer" containerID="2730321b13f479d7b85d6b9e20fa140af69a14577c5a395bd7315b814ea3692b" Jan 27 10:26:11 crc kubenswrapper[4869]: I0127 10:26:11.825263 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:26:11 crc kubenswrapper[4869]: E0127 10:26:11.825675 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:26:12 crc kubenswrapper[4869]: I0127 10:26:12.042404 4869 scope.go:117] "RemoveContainer" containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:26:12 crc kubenswrapper[4869]: E0127 10:26:12.042738 4869 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:26:24 crc kubenswrapper[4869]: I0127 10:26:24.033074 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:26:24 crc kubenswrapper[4869]: E0127 10:26:24.033863 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:26:25 crc kubenswrapper[4869]: I0127 10:26:25.032949 4869 scope.go:117] "RemoveContainer" containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:26:25 crc kubenswrapper[4869]: E0127 10:26:25.033130 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:26:37 crc kubenswrapper[4869]: I0127 10:26:37.033947 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:26:37 crc kubenswrapper[4869]: E0127 10:26:37.035501 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:26:38 crc kubenswrapper[4869]: I0127 10:26:38.033962 4869 scope.go:117] "RemoveContainer" containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:26:38 crc kubenswrapper[4869]: E0127 10:26:38.034159 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:26:48 crc kubenswrapper[4869]: I0127 10:26:48.033393 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:26:48 crc kubenswrapper[4869]: E0127 10:26:48.034286 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:26:53 crc kubenswrapper[4869]: I0127 10:26:53.033269 4869 scope.go:117] "RemoveContainer" containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:26:53 crc kubenswrapper[4869]: E0127 10:26:53.033794 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:26:59 crc kubenswrapper[4869]: I0127 10:26:59.034118 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:26:59 crc kubenswrapper[4869]: E0127 10:26:59.035403 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:27:08 crc kubenswrapper[4869]: I0127 10:27:08.033348 4869 scope.go:117] "RemoveContainer" containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:27:08 crc kubenswrapper[4869]: E0127 10:27:08.034078 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:27:12 crc kubenswrapper[4869]: I0127 10:27:12.038270 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:27:12 crc kubenswrapper[4869]: E0127 10:27:12.039783 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:27:15 crc kubenswrapper[4869]: I0127 10:27:15.697951 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:27:15 crc kubenswrapper[4869]: I0127 10:27:15.698360 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:27:20 crc kubenswrapper[4869]: I0127 10:27:20.034264 4869 scope.go:117] "RemoveContainer" containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:27:20 crc kubenswrapper[4869]: E0127 10:27:20.034466 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:27:25 crc kubenswrapper[4869]: I0127 10:27:25.033391 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:27:25 crc kubenswrapper[4869]: E0127 
10:27:25.034531 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:27:31 crc kubenswrapper[4869]: I0127 10:27:31.033434 4869 scope.go:117] "RemoveContainer" containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:27:31 crc kubenswrapper[4869]: E0127 10:27:31.034273 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:27:39 crc kubenswrapper[4869]: I0127 10:27:39.033259 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:27:39 crc kubenswrapper[4869]: E0127 10:27:39.036053 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:27:45 crc kubenswrapper[4869]: I0127 10:27:45.697823 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:27:45 crc kubenswrapper[4869]: I0127 10:27:45.698617 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:27:46 crc kubenswrapper[4869]: I0127 10:27:46.033094 4869 scope.go:117] "RemoveContainer" containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:27:46 crc kubenswrapper[4869]: E0127 10:27:46.033395 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:27:53 crc kubenswrapper[4869]: I0127 10:27:53.032903 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:27:53 crc kubenswrapper[4869]: E0127 10:27:53.033616 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:28:00 crc kubenswrapper[4869]: I0127 10:28:00.033430 4869 scope.go:117] "RemoveContainer" 
containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:28:00 crc kubenswrapper[4869]: E0127 10:28:00.034082 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:28:05 crc kubenswrapper[4869]: I0127 10:28:05.032863 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:28:05 crc kubenswrapper[4869]: E0127 10:28:05.033479 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:28:14 crc kubenswrapper[4869]: I0127 10:28:14.033976 4869 scope.go:117] "RemoveContainer" containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:28:14 crc kubenswrapper[4869]: E0127 10:28:14.035217 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:28:15 crc kubenswrapper[4869]: I0127 10:28:15.698218 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:28:15 crc kubenswrapper[4869]: I0127 10:28:15.698565 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:28:15 crc kubenswrapper[4869]: I0127 10:28:15.698625 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 10:28:15 crc kubenswrapper[4869]: I0127 10:28:15.699376 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4bcba44fbcb50f71e5672c0ef23b355f7a8a7dd5428b67fc78b24dacaac2e337"} pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 10:28:15 crc kubenswrapper[4869]: I0127 10:28:15.699429 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" containerID="cri-o://4bcba44fbcb50f71e5672c0ef23b355f7a8a7dd5428b67fc78b24dacaac2e337" gracePeriod=600 Jan 27 10:28:15 crc kubenswrapper[4869]: E0127 10:28:15.902406 4869 cadvisor_stats_provider.go:516] "Partial 
failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12a3e458_3f5f_46cf_b242_9a3986250bcf.slice/crio-4bcba44fbcb50f71e5672c0ef23b355f7a8a7dd5428b67fc78b24dacaac2e337.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12a3e458_3f5f_46cf_b242_9a3986250bcf.slice/crio-conmon-4bcba44fbcb50f71e5672c0ef23b355f7a8a7dd5428b67fc78b24dacaac2e337.scope\": RecentStats: unable to find data in memory cache]" Jan 27 10:28:16 crc kubenswrapper[4869]: I0127 10:28:16.033862 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:28:16 crc kubenswrapper[4869]: E0127 10:28:16.034371 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:28:16 crc kubenswrapper[4869]: I0127 10:28:16.805168 4869 generic.go:334] "Generic (PLEG): container finished" podID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerID="4bcba44fbcb50f71e5672c0ef23b355f7a8a7dd5428b67fc78b24dacaac2e337" exitCode=0 Jan 27 10:28:16 crc kubenswrapper[4869]: I0127 10:28:16.805246 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerDied","Data":"4bcba44fbcb50f71e5672c0ef23b355f7a8a7dd5428b67fc78b24dacaac2e337"} Jan 27 10:28:16 crc kubenswrapper[4869]: I0127 10:28:16.805387 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerStarted","Data":"64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490"} Jan 27 10:28:16 crc kubenswrapper[4869]: I0127 10:28:16.805405 4869 scope.go:117] "RemoveContainer" containerID="cfd405e14e4477a7f9b6e50c10b6d42a3d060e172294f8a4276f34e267c1a1b4" Jan 27 10:28:27 crc kubenswrapper[4869]: I0127 10:28:27.033187 4869 scope.go:117] "RemoveContainer" containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:28:27 crc kubenswrapper[4869]: E0127 10:28:27.033930 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:28:31 crc kubenswrapper[4869]: I0127 10:28:31.033189 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:28:31 crc kubenswrapper[4869]: E0127 10:28:31.033859 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:28:40 crc kubenswrapper[4869]: I0127 10:28:40.033618 4869 scope.go:117] "RemoveContainer" 
containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:28:40 crc kubenswrapper[4869]: E0127 10:28:40.034460 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:28:46 crc kubenswrapper[4869]: I0127 10:28:46.035279 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:28:46 crc kubenswrapper[4869]: E0127 10:28:46.036106 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:28:55 crc kubenswrapper[4869]: I0127 10:28:55.033033 4869 scope.go:117] "RemoveContainer" containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:28:55 crc kubenswrapper[4869]: E0127 10:28:55.033713 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:29:00 crc kubenswrapper[4869]: I0127 10:29:00.033132 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:29:00 crc kubenswrapper[4869]: E0127 10:29:00.034171 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:29:09 crc kubenswrapper[4869]: I0127 10:29:09.033251 4869 scope.go:117] "RemoveContainer" containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:29:09 crc kubenswrapper[4869]: E0127 10:29:09.052504 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:29:12 crc kubenswrapper[4869]: I0127 10:29:12.036619 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:29:12 crc kubenswrapper[4869]: E0127 10:29:12.036861 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:29:21 crc kubenswrapper[4869]: I0127 10:29:21.033648 4869 scope.go:117] "RemoveContainer" 
containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:29:21 crc kubenswrapper[4869]: E0127 10:29:21.034468 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:29:23 crc kubenswrapper[4869]: I0127 10:29:23.033596 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:29:23 crc kubenswrapper[4869]: E0127 10:29:23.034177 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:29:36 crc kubenswrapper[4869]: I0127 10:29:36.033632 4869 scope.go:117] "RemoveContainer" containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:29:36 crc kubenswrapper[4869]: I0127 10:29:36.034393 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:29:36 crc kubenswrapper[4869]: E0127 10:29:36.034714 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:29:36 crc kubenswrapper[4869]: E0127 10:29:36.034752 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:29:47 crc kubenswrapper[4869]: I0127 10:29:47.032875 4869 scope.go:117] "RemoveContainer" containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:29:47 crc kubenswrapper[4869]: E0127 10:29:47.033645 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:29:50 crc kubenswrapper[4869]: I0127 10:29:50.033283 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:29:50 crc kubenswrapper[4869]: E0127 10:29:50.033970 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:30:00 crc kubenswrapper[4869]: I0127 10:30:00.033704 4869 scope.go:117] "RemoveContainer" 
containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:30:00 crc kubenswrapper[4869]: E0127 10:30:00.034585 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:30:00 crc kubenswrapper[4869]: I0127 10:30:00.142349 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491830-csd8v"] Jan 27 10:30:00 crc kubenswrapper[4869]: E0127 10:30:00.142798 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d9587c-7bef-427e-b2e3-51e49b3ee67e" containerName="extract-utilities" Jan 27 10:30:00 crc kubenswrapper[4869]: I0127 10:30:00.142821 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d9587c-7bef-427e-b2e3-51e49b3ee67e" containerName="extract-utilities" Jan 27 10:30:00 crc kubenswrapper[4869]: E0127 10:30:00.142891 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d9587c-7bef-427e-b2e3-51e49b3ee67e" containerName="registry-server" Jan 27 10:30:00 crc kubenswrapper[4869]: I0127 10:30:00.142903 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d9587c-7bef-427e-b2e3-51e49b3ee67e" containerName="registry-server" Jan 27 10:30:00 crc kubenswrapper[4869]: E0127 10:30:00.142927 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d9587c-7bef-427e-b2e3-51e49b3ee67e" containerName="extract-content" Jan 27 10:30:00 crc kubenswrapper[4869]: I0127 10:30:00.142936 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d9587c-7bef-427e-b2e3-51e49b3ee67e" containerName="extract-content" Jan 27 10:30:00 crc kubenswrapper[4869]: I0127 10:30:00.143110 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8d9587c-7bef-427e-b2e3-51e49b3ee67e" containerName="registry-server" Jan 27 10:30:00 crc kubenswrapper[4869]: I0127 10:30:00.143589 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-csd8v" Jan 27 10:30:00 crc kubenswrapper[4869]: I0127 10:30:00.145250 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 10:30:00 crc kubenswrapper[4869]: I0127 10:30:00.145442 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 10:30:00 crc kubenswrapper[4869]: I0127 10:30:00.150871 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491830-csd8v"] Jan 27 10:30:00 crc kubenswrapper[4869]: I0127 10:30:00.303812 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp9nk\" (UniqueName: \"kubernetes.io/projected/e253a16b-0fb5-44b7-aa56-b878ced1a2f9-kube-api-access-vp9nk\") pod \"collect-profiles-29491830-csd8v\" (UID: \"e253a16b-0fb5-44b7-aa56-b878ced1a2f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-csd8v" Jan 27 10:30:00 crc kubenswrapper[4869]: I0127 10:30:00.304165 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e253a16b-0fb5-44b7-aa56-b878ced1a2f9-secret-volume\") pod \"collect-profiles-29491830-csd8v\" (UID: \"e253a16b-0fb5-44b7-aa56-b878ced1a2f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-csd8v" Jan 27 10:30:00 crc kubenswrapper[4869]: I0127 10:30:00.304273 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e253a16b-0fb5-44b7-aa56-b878ced1a2f9-config-volume\") pod \"collect-profiles-29491830-csd8v\" (UID: \"e253a16b-0fb5-44b7-aa56-b878ced1a2f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-csd8v" Jan 27 10:30:00 crc kubenswrapper[4869]: I0127 10:30:00.405716 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e253a16b-0fb5-44b7-aa56-b878ced1a2f9-secret-volume\") pod \"collect-profiles-29491830-csd8v\" (UID: \"e253a16b-0fb5-44b7-aa56-b878ced1a2f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-csd8v" Jan 27 10:30:00 crc kubenswrapper[4869]: I0127 10:30:00.405996 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e253a16b-0fb5-44b7-aa56-b878ced1a2f9-config-volume\") pod \"collect-profiles-29491830-csd8v\" (UID: \"e253a16b-0fb5-44b7-aa56-b878ced1a2f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-csd8v" Jan 27 10:30:00 crc kubenswrapper[4869]: I0127 10:30:00.406092 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp9nk\" (UniqueName: \"kubernetes.io/projected/e253a16b-0fb5-44b7-aa56-b878ced1a2f9-kube-api-access-vp9nk\") pod \"collect-profiles-29491830-csd8v\" (UID: \"e253a16b-0fb5-44b7-aa56-b878ced1a2f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-csd8v" Jan 27 10:30:00 crc kubenswrapper[4869]: I0127 10:30:00.406893 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e253a16b-0fb5-44b7-aa56-b878ced1a2f9-config-volume\") pod 
\"collect-profiles-29491830-csd8v\" (UID: \"e253a16b-0fb5-44b7-aa56-b878ced1a2f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-csd8v" Jan 27 10:30:00 crc kubenswrapper[4869]: I0127 10:30:00.414607 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e253a16b-0fb5-44b7-aa56-b878ced1a2f9-secret-volume\") pod \"collect-profiles-29491830-csd8v\" (UID: \"e253a16b-0fb5-44b7-aa56-b878ced1a2f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-csd8v" Jan 27 10:30:00 crc kubenswrapper[4869]: I0127 10:30:00.424887 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp9nk\" (UniqueName: \"kubernetes.io/projected/e253a16b-0fb5-44b7-aa56-b878ced1a2f9-kube-api-access-vp9nk\") pod \"collect-profiles-29491830-csd8v\" (UID: \"e253a16b-0fb5-44b7-aa56-b878ced1a2f9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-csd8v" Jan 27 10:30:00 crc kubenswrapper[4869]: I0127 10:30:00.477410 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-csd8v" Jan 27 10:30:00 crc kubenswrapper[4869]: I0127 10:30:00.901878 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491830-csd8v"] Jan 27 10:30:00 crc kubenswrapper[4869]: W0127 10:30:00.906927 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode253a16b_0fb5_44b7_aa56_b878ced1a2f9.slice/crio-ee5f2259a3b0b821e434da913028c8c6c21573a79a3becc0912129e66c911c36 WatchSource:0}: Error finding container ee5f2259a3b0b821e434da913028c8c6c21573a79a3becc0912129e66c911c36: Status 404 returned error can't find the container with id ee5f2259a3b0b821e434da913028c8c6c21573a79a3becc0912129e66c911c36 Jan 27 10:30:01 crc kubenswrapper[4869]: I0127 10:30:01.033501 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:30:01 crc kubenswrapper[4869]: E0127 10:30:01.034186 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:30:01 crc kubenswrapper[4869]: I0127 10:30:01.887794 4869 generic.go:334] "Generic (PLEG): container finished" podID="e253a16b-0fb5-44b7-aa56-b878ced1a2f9" containerID="0cc4d233953635e2bb30e2df36b3d7d01ca71e5116057b5c033f1b7e51b70557" exitCode=0 Jan 27 10:30:01 crc kubenswrapper[4869]: I0127 10:30:01.887887 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-csd8v" event={"ID":"e253a16b-0fb5-44b7-aa56-b878ced1a2f9","Type":"ContainerDied","Data":"0cc4d233953635e2bb30e2df36b3d7d01ca71e5116057b5c033f1b7e51b70557"} Jan 27 10:30:01 crc kubenswrapper[4869]: I0127 10:30:01.888098 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-csd8v" event={"ID":"e253a16b-0fb5-44b7-aa56-b878ced1a2f9","Type":"ContainerStarted","Data":"ee5f2259a3b0b821e434da913028c8c6c21573a79a3becc0912129e66c911c36"} Jan 27 10:30:03 crc kubenswrapper[4869]: I0127 10:30:03.290646 4869 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-csd8v" Jan 27 10:30:03 crc kubenswrapper[4869]: I0127 10:30:03.453125 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e253a16b-0fb5-44b7-aa56-b878ced1a2f9-config-volume\") pod \"e253a16b-0fb5-44b7-aa56-b878ced1a2f9\" (UID: \"e253a16b-0fb5-44b7-aa56-b878ced1a2f9\") " Jan 27 10:30:03 crc kubenswrapper[4869]: I0127 10:30:03.453199 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp9nk\" (UniqueName: \"kubernetes.io/projected/e253a16b-0fb5-44b7-aa56-b878ced1a2f9-kube-api-access-vp9nk\") pod \"e253a16b-0fb5-44b7-aa56-b878ced1a2f9\" (UID: \"e253a16b-0fb5-44b7-aa56-b878ced1a2f9\") " Jan 27 10:30:03 crc kubenswrapper[4869]: I0127 10:30:03.453286 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e253a16b-0fb5-44b7-aa56-b878ced1a2f9-secret-volume\") pod \"e253a16b-0fb5-44b7-aa56-b878ced1a2f9\" (UID: \"e253a16b-0fb5-44b7-aa56-b878ced1a2f9\") " Jan 27 10:30:03 crc kubenswrapper[4869]: I0127 10:30:03.454055 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e253a16b-0fb5-44b7-aa56-b878ced1a2f9-config-volume" (OuterVolumeSpecName: "config-volume") pod "e253a16b-0fb5-44b7-aa56-b878ced1a2f9" (UID: "e253a16b-0fb5-44b7-aa56-b878ced1a2f9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:30:03 crc kubenswrapper[4869]: I0127 10:30:03.458859 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e253a16b-0fb5-44b7-aa56-b878ced1a2f9-kube-api-access-vp9nk" (OuterVolumeSpecName: "kube-api-access-vp9nk") pod "e253a16b-0fb5-44b7-aa56-b878ced1a2f9" (UID: "e253a16b-0fb5-44b7-aa56-b878ced1a2f9"). InnerVolumeSpecName "kube-api-access-vp9nk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:30:03 crc kubenswrapper[4869]: I0127 10:30:03.460917 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e253a16b-0fb5-44b7-aa56-b878ced1a2f9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e253a16b-0fb5-44b7-aa56-b878ced1a2f9" (UID: "e253a16b-0fb5-44b7-aa56-b878ced1a2f9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 10:30:03 crc kubenswrapper[4869]: I0127 10:30:03.555563 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e253a16b-0fb5-44b7-aa56-b878ced1a2f9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 10:30:03 crc kubenswrapper[4869]: I0127 10:30:03.555606 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp9nk\" (UniqueName: \"kubernetes.io/projected/e253a16b-0fb5-44b7-aa56-b878ced1a2f9-kube-api-access-vp9nk\") on node \"crc\" DevicePath \"\"" Jan 27 10:30:03 crc kubenswrapper[4869]: I0127 10:30:03.555623 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e253a16b-0fb5-44b7-aa56-b878ced1a2f9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 10:30:03 crc kubenswrapper[4869]: I0127 10:30:03.906732 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-csd8v" event={"ID":"e253a16b-0fb5-44b7-aa56-b878ced1a2f9","Type":"ContainerDied","Data":"ee5f2259a3b0b821e434da913028c8c6c21573a79a3becc0912129e66c911c36"} Jan 27 10:30:03 crc kubenswrapper[4869]: I0127 10:30:03.906777 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee5f2259a3b0b821e434da913028c8c6c21573a79a3becc0912129e66c911c36" Jan 27 10:30:03 crc kubenswrapper[4869]: I0127 10:30:03.906878 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491830-csd8v" Jan 27 10:30:04 crc kubenswrapper[4869]: I0127 10:30:04.361865 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z"] Jan 27 10:30:04 crc kubenswrapper[4869]: I0127 10:30:04.369758 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491785-vk98z"] Jan 27 10:30:06 crc kubenswrapper[4869]: I0127 10:30:06.049674 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a2ec119-d8f3-4edb-aa2f-d4ffd3617458" path="/var/lib/kubelet/pods/3a2ec119-d8f3-4edb-aa2f-d4ffd3617458/volumes" Jan 27 10:30:12 crc kubenswrapper[4869]: I0127 10:30:12.040011 4869 scope.go:117] "RemoveContainer" containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:30:12 crc kubenswrapper[4869]: E0127 10:30:12.040888 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:30:16 crc kubenswrapper[4869]: I0127 10:30:16.033882 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:30:16 crc kubenswrapper[4869]: E0127 10:30:16.034427 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:30:23 crc kubenswrapper[4869]: I0127 10:30:23.033662 4869 scope.go:117] "RemoveContainer" 
containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:30:23 crc kubenswrapper[4869]: E0127 10:30:23.034492 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:30:28 crc kubenswrapper[4869]: I0127 10:30:28.965773 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qqm6d"] Jan 27 10:30:28 crc kubenswrapper[4869]: E0127 10:30:28.966743 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e253a16b-0fb5-44b7-aa56-b878ced1a2f9" containerName="collect-profiles" Jan 27 10:30:28 crc kubenswrapper[4869]: I0127 10:30:28.966756 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e253a16b-0fb5-44b7-aa56-b878ced1a2f9" containerName="collect-profiles" Jan 27 10:30:28 crc kubenswrapper[4869]: I0127 10:30:28.966995 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="e253a16b-0fb5-44b7-aa56-b878ced1a2f9" containerName="collect-profiles" Jan 27 10:30:28 crc kubenswrapper[4869]: I0127 10:30:28.968394 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqm6d" Jan 27 10:30:28 crc kubenswrapper[4869]: I0127 10:30:28.986224 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qqm6d"] Jan 27 10:30:29 crc kubenswrapper[4869]: I0127 10:30:29.033372 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:30:29 crc kubenswrapper[4869]: E0127 10:30:29.033600 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:30:29 crc kubenswrapper[4869]: I0127 10:30:29.067271 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaee9146-6ac6-4cac-9c5c-2e58489a875c-catalog-content\") pod \"redhat-operators-qqm6d\" (UID: \"eaee9146-6ac6-4cac-9c5c-2e58489a875c\") " pod="openshift-marketplace/redhat-operators-qqm6d" Jan 27 10:30:29 crc kubenswrapper[4869]: I0127 10:30:29.067332 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaee9146-6ac6-4cac-9c5c-2e58489a875c-utilities\") pod \"redhat-operators-qqm6d\" (UID: \"eaee9146-6ac6-4cac-9c5c-2e58489a875c\") " pod="openshift-marketplace/redhat-operators-qqm6d" Jan 27 10:30:29 crc kubenswrapper[4869]: I0127 10:30:29.067549 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx92d\" (UniqueName: \"kubernetes.io/projected/eaee9146-6ac6-4cac-9c5c-2e58489a875c-kube-api-access-vx92d\") pod \"redhat-operators-qqm6d\" (UID: \"eaee9146-6ac6-4cac-9c5c-2e58489a875c\") " pod="openshift-marketplace/redhat-operators-qqm6d" Jan 27 10:30:29 crc kubenswrapper[4869]: I0127 10:30:29.169547 4869 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaee9146-6ac6-4cac-9c5c-2e58489a875c-catalog-content\") pod \"redhat-operators-qqm6d\" (UID: \"eaee9146-6ac6-4cac-9c5c-2e58489a875c\") " pod="openshift-marketplace/redhat-operators-qqm6d" Jan 27 10:30:29 crc kubenswrapper[4869]: I0127 10:30:29.169636 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaee9146-6ac6-4cac-9c5c-2e58489a875c-utilities\") pod \"redhat-operators-qqm6d\" (UID: \"eaee9146-6ac6-4cac-9c5c-2e58489a875c\") " pod="openshift-marketplace/redhat-operators-qqm6d" Jan 27 10:30:29 crc kubenswrapper[4869]: I0127 10:30:29.169770 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx92d\" (UniqueName: \"kubernetes.io/projected/eaee9146-6ac6-4cac-9c5c-2e58489a875c-kube-api-access-vx92d\") pod \"redhat-operators-qqm6d\" (UID: \"eaee9146-6ac6-4cac-9c5c-2e58489a875c\") " pod="openshift-marketplace/redhat-operators-qqm6d" Jan 27 10:30:29 crc kubenswrapper[4869]: I0127 10:30:29.170310 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaee9146-6ac6-4cac-9c5c-2e58489a875c-catalog-content\") pod \"redhat-operators-qqm6d\" (UID: \"eaee9146-6ac6-4cac-9c5c-2e58489a875c\") " pod="openshift-marketplace/redhat-operators-qqm6d" Jan 27 10:30:29 crc kubenswrapper[4869]: I0127 10:30:29.170384 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaee9146-6ac6-4cac-9c5c-2e58489a875c-utilities\") pod \"redhat-operators-qqm6d\" (UID: \"eaee9146-6ac6-4cac-9c5c-2e58489a875c\") " pod="openshift-marketplace/redhat-operators-qqm6d" Jan 27 10:30:29 crc kubenswrapper[4869]: I0127 10:30:29.192774 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx92d\" (UniqueName: \"kubernetes.io/projected/eaee9146-6ac6-4cac-9c5c-2e58489a875c-kube-api-access-vx92d\") pod \"redhat-operators-qqm6d\" (UID: \"eaee9146-6ac6-4cac-9c5c-2e58489a875c\") " pod="openshift-marketplace/redhat-operators-qqm6d" Jan 27 10:30:29 crc kubenswrapper[4869]: I0127 10:30:29.291937 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qqm6d" Jan 27 10:30:29 crc kubenswrapper[4869]: I0127 10:30:29.703141 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qqm6d"] Jan 27 10:30:30 crc kubenswrapper[4869]: I0127 10:30:30.104444 4869 generic.go:334] "Generic (PLEG): container finished" podID="eaee9146-6ac6-4cac-9c5c-2e58489a875c" containerID="f73ae9a830ae158bb7a2b7aedd05978e763eb1757cbfcee5f164b992d5ec5723" exitCode=0 Jan 27 10:30:30 crc kubenswrapper[4869]: I0127 10:30:30.104536 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqm6d" event={"ID":"eaee9146-6ac6-4cac-9c5c-2e58489a875c","Type":"ContainerDied","Data":"f73ae9a830ae158bb7a2b7aedd05978e763eb1757cbfcee5f164b992d5ec5723"} Jan 27 10:30:30 crc kubenswrapper[4869]: I0127 10:30:30.104774 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqm6d" event={"ID":"eaee9146-6ac6-4cac-9c5c-2e58489a875c","Type":"ContainerStarted","Data":"f47ab29c9b4023ef906654454ea6ef756bf0f9901ccf7cae85084b14d6a31d73"} Jan 27 10:30:30 crc kubenswrapper[4869]: I0127 10:30:30.106338 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 10:30:32 crc kubenswrapper[4869]: I0127 10:30:32.123150 4869 generic.go:334] "Generic (PLEG): container finished" podID="eaee9146-6ac6-4cac-9c5c-2e58489a875c" containerID="c173e6788368230f2cba4413ac6f5e85c91b6f51f534846f81245feaea787153" exitCode=0 Jan 27 10:30:32 crc kubenswrapper[4869]: I0127 10:30:32.123502 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqm6d" event={"ID":"eaee9146-6ac6-4cac-9c5c-2e58489a875c","Type":"ContainerDied","Data":"c173e6788368230f2cba4413ac6f5e85c91b6f51f534846f81245feaea787153"} Jan 27 10:30:33 crc kubenswrapper[4869]: I0127 10:30:33.132716 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqm6d" event={"ID":"eaee9146-6ac6-4cac-9c5c-2e58489a875c","Type":"ContainerStarted","Data":"02260679072a60abe421a23c7382a09c49417ee51dd50492989a85e9d66e2669"} Jan 27 10:30:33 crc kubenswrapper[4869]: I0127 10:30:33.154428 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qqm6d" podStartSLOduration=2.513156397 podStartE2EDuration="5.154400638s" podCreationTimestamp="2026-01-27 10:30:28 +0000 UTC" firstStartedPulling="2026-01-27 10:30:30.106126942 +0000 UTC m=+2198.726551025" lastFinishedPulling="2026-01-27 10:30:32.747371193 +0000 UTC m=+2201.367795266" observedRunningTime="2026-01-27 10:30:33.150214458 +0000 UTC m=+2201.770638561" watchObservedRunningTime="2026-01-27 10:30:33.154400638 +0000 UTC m=+2201.774824751" Jan 27 10:30:35 crc kubenswrapper[4869]: I0127 10:30:35.045049 4869 scope.go:117] "RemoveContainer" containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:30:35 crc kubenswrapper[4869]: E0127 10:30:35.045848 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:30:39 crc kubenswrapper[4869]: I0127 10:30:39.292195 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-qqm6d" Jan 27 10:30:39 crc kubenswrapper[4869]: I0127 10:30:39.292496 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qqm6d" Jan 27 10:30:39 crc kubenswrapper[4869]: I0127 10:30:39.332996 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qqm6d" Jan 27 10:30:40 crc kubenswrapper[4869]: I0127 10:30:40.234913 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qqm6d" Jan 27 10:30:40 crc kubenswrapper[4869]: I0127 10:30:40.301311 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qqm6d"] Jan 27 10:30:42 crc kubenswrapper[4869]: I0127 10:30:42.202510 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qqm6d" podUID="eaee9146-6ac6-4cac-9c5c-2e58489a875c" containerName="registry-server" containerID="cri-o://02260679072a60abe421a23c7382a09c49417ee51dd50492989a85e9d66e2669" gracePeriod=2 Jan 27 10:30:43 crc kubenswrapper[4869]: I0127 10:30:43.033295 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:30:43 crc kubenswrapper[4869]: E0127 10:30:43.033555 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:30:44 crc kubenswrapper[4869]: I0127 10:30:44.220603 4869 generic.go:334] "Generic (PLEG): container finished" podID="eaee9146-6ac6-4cac-9c5c-2e58489a875c" containerID="02260679072a60abe421a23c7382a09c49417ee51dd50492989a85e9d66e2669" exitCode=0 Jan 27 10:30:44 crc kubenswrapper[4869]: I0127 10:30:44.221160 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqm6d" event={"ID":"eaee9146-6ac6-4cac-9c5c-2e58489a875c","Type":"ContainerDied","Data":"02260679072a60abe421a23c7382a09c49417ee51dd50492989a85e9d66e2669"} Jan 27 10:30:44 crc kubenswrapper[4869]: I0127 10:30:44.523421 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qqm6d" Jan 27 10:30:44 crc kubenswrapper[4869]: I0127 10:30:44.626193 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaee9146-6ac6-4cac-9c5c-2e58489a875c-catalog-content\") pod \"eaee9146-6ac6-4cac-9c5c-2e58489a875c\" (UID: \"eaee9146-6ac6-4cac-9c5c-2e58489a875c\") " Jan 27 10:30:44 crc kubenswrapper[4869]: I0127 10:30:44.626503 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaee9146-6ac6-4cac-9c5c-2e58489a875c-utilities\") pod \"eaee9146-6ac6-4cac-9c5c-2e58489a875c\" (UID: \"eaee9146-6ac6-4cac-9c5c-2e58489a875c\") " Jan 27 10:30:44 crc kubenswrapper[4869]: I0127 10:30:44.626606 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx92d\" (UniqueName: \"kubernetes.io/projected/eaee9146-6ac6-4cac-9c5c-2e58489a875c-kube-api-access-vx92d\") pod \"eaee9146-6ac6-4cac-9c5c-2e58489a875c\" (UID: \"eaee9146-6ac6-4cac-9c5c-2e58489a875c\") " Jan 27 10:30:44 crc kubenswrapper[4869]: I0127 10:30:44.627501 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaee9146-6ac6-4cac-9c5c-2e58489a875c-utilities" (OuterVolumeSpecName: "utilities") pod "eaee9146-6ac6-4cac-9c5c-2e58489a875c" (UID: "eaee9146-6ac6-4cac-9c5c-2e58489a875c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:30:44 crc kubenswrapper[4869]: I0127 10:30:44.636131 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaee9146-6ac6-4cac-9c5c-2e58489a875c-kube-api-access-vx92d" (OuterVolumeSpecName: "kube-api-access-vx92d") pod "eaee9146-6ac6-4cac-9c5c-2e58489a875c" (UID: "eaee9146-6ac6-4cac-9c5c-2e58489a875c"). InnerVolumeSpecName "kube-api-access-vx92d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:30:44 crc kubenswrapper[4869]: I0127 10:30:44.729319 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaee9146-6ac6-4cac-9c5c-2e58489a875c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:30:44 crc kubenswrapper[4869]: I0127 10:30:44.729365 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx92d\" (UniqueName: \"kubernetes.io/projected/eaee9146-6ac6-4cac-9c5c-2e58489a875c-kube-api-access-vx92d\") on node \"crc\" DevicePath \"\"" Jan 27 10:30:44 crc kubenswrapper[4869]: I0127 10:30:44.756736 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaee9146-6ac6-4cac-9c5c-2e58489a875c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eaee9146-6ac6-4cac-9c5c-2e58489a875c" (UID: "eaee9146-6ac6-4cac-9c5c-2e58489a875c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:30:44 crc kubenswrapper[4869]: I0127 10:30:44.831248 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaee9146-6ac6-4cac-9c5c-2e58489a875c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:30:45 crc kubenswrapper[4869]: I0127 10:30:45.230339 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqm6d" event={"ID":"eaee9146-6ac6-4cac-9c5c-2e58489a875c","Type":"ContainerDied","Data":"f47ab29c9b4023ef906654454ea6ef756bf0f9901ccf7cae85084b14d6a31d73"} Jan 27 10:30:45 crc kubenswrapper[4869]: I0127 10:30:45.230389 4869 scope.go:117] "RemoveContainer" containerID="02260679072a60abe421a23c7382a09c49417ee51dd50492989a85e9d66e2669" Jan 27 10:30:45 crc kubenswrapper[4869]: I0127 10:30:45.230424 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqm6d" Jan 27 10:30:45 crc kubenswrapper[4869]: I0127 10:30:45.275304 4869 scope.go:117] "RemoveContainer" containerID="c173e6788368230f2cba4413ac6f5e85c91b6f51f534846f81245feaea787153" Jan 27 10:30:45 crc kubenswrapper[4869]: I0127 10:30:45.276096 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qqm6d"] Jan 27 10:30:45 crc kubenswrapper[4869]: I0127 10:30:45.283555 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qqm6d"] Jan 27 10:30:45 crc kubenswrapper[4869]: I0127 10:30:45.329919 4869 scope.go:117] "RemoveContainer" containerID="f73ae9a830ae158bb7a2b7aedd05978e763eb1757cbfcee5f164b992d5ec5723" Jan 27 10:30:45 crc kubenswrapper[4869]: I0127 10:30:45.697760 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:30:45 crc kubenswrapper[4869]: I0127 10:30:45.697819 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:30:46 crc kubenswrapper[4869]: I0127 10:30:46.045274 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaee9146-6ac6-4cac-9c5c-2e58489a875c" path="/var/lib/kubelet/pods/eaee9146-6ac6-4cac-9c5c-2e58489a875c/volumes" Jan 27 10:30:49 crc kubenswrapper[4869]: I0127 10:30:49.033865 4869 scope.go:117] "RemoveContainer" containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:30:49 crc kubenswrapper[4869]: E0127 10:30:49.034492 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:30:54 crc kubenswrapper[4869]: I0127 10:30:54.175864 4869 scope.go:117] "RemoveContainer" containerID="81f2de3c56348d49357a97adfae12fb106f9dc64fbef3355806b6feb19137646" Jan 27 10:30:55 crc kubenswrapper[4869]: I0127 10:30:55.986812 4869 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wfkqv"] Jan 27 10:30:55 crc kubenswrapper[4869]: E0127 10:30:55.988434 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaee9146-6ac6-4cac-9c5c-2e58489a875c" containerName="registry-server" Jan 27 10:30:55 crc kubenswrapper[4869]: I0127 10:30:55.988455 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaee9146-6ac6-4cac-9c5c-2e58489a875c" containerName="registry-server" Jan 27 10:30:55 crc kubenswrapper[4869]: E0127 10:30:55.988490 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaee9146-6ac6-4cac-9c5c-2e58489a875c" containerName="extract-content" Jan 27 10:30:55 crc kubenswrapper[4869]: I0127 10:30:55.988497 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaee9146-6ac6-4cac-9c5c-2e58489a875c" containerName="extract-content" Jan 27 10:30:55 crc kubenswrapper[4869]: E0127 10:30:55.988503 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaee9146-6ac6-4cac-9c5c-2e58489a875c" containerName="extract-utilities" Jan 27 10:30:55 crc kubenswrapper[4869]: I0127 10:30:55.988510 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaee9146-6ac6-4cac-9c5c-2e58489a875c" containerName="extract-utilities" Jan 27 10:30:55 crc kubenswrapper[4869]: I0127 10:30:55.988654 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaee9146-6ac6-4cac-9c5c-2e58489a875c" containerName="registry-server" Jan 27 10:30:55 crc kubenswrapper[4869]: I0127 10:30:55.990228 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfkqv" Jan 27 10:30:55 crc kubenswrapper[4869]: I0127 10:30:55.998628 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfkqv"] Jan 27 10:30:56 crc kubenswrapper[4869]: I0127 10:30:56.033406 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:30:56 crc kubenswrapper[4869]: E0127 10:30:56.033670 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:30:56 crc kubenswrapper[4869]: I0127 10:30:56.129768 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf-catalog-content\") pod \"redhat-marketplace-wfkqv\" (UID: \"9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf\") " pod="openshift-marketplace/redhat-marketplace-wfkqv" Jan 27 10:30:56 crc kubenswrapper[4869]: I0127 10:30:56.129954 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf-utilities\") pod \"redhat-marketplace-wfkqv\" (UID: \"9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf\") " pod="openshift-marketplace/redhat-marketplace-wfkqv" Jan 27 10:30:56 crc kubenswrapper[4869]: I0127 10:30:56.130380 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsnls\" (UniqueName: \"kubernetes.io/projected/9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf-kube-api-access-gsnls\") pod 
\"redhat-marketplace-wfkqv\" (UID: \"9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf\") " pod="openshift-marketplace/redhat-marketplace-wfkqv" Jan 27 10:30:56 crc kubenswrapper[4869]: I0127 10:30:56.232019 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf-utilities\") pod \"redhat-marketplace-wfkqv\" (UID: \"9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf\") " pod="openshift-marketplace/redhat-marketplace-wfkqv" Jan 27 10:30:56 crc kubenswrapper[4869]: I0127 10:30:56.232150 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsnls\" (UniqueName: \"kubernetes.io/projected/9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf-kube-api-access-gsnls\") pod \"redhat-marketplace-wfkqv\" (UID: \"9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf\") " pod="openshift-marketplace/redhat-marketplace-wfkqv" Jan 27 10:30:56 crc kubenswrapper[4869]: I0127 10:30:56.232240 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf-catalog-content\") pod \"redhat-marketplace-wfkqv\" (UID: \"9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf\") " pod="openshift-marketplace/redhat-marketplace-wfkqv" Jan 27 10:30:56 crc kubenswrapper[4869]: I0127 10:30:56.232546 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf-utilities\") pod \"redhat-marketplace-wfkqv\" (UID: \"9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf\") " pod="openshift-marketplace/redhat-marketplace-wfkqv" Jan 27 10:30:56 crc kubenswrapper[4869]: I0127 10:30:56.232613 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf-catalog-content\") pod \"redhat-marketplace-wfkqv\" (UID: \"9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf\") " pod="openshift-marketplace/redhat-marketplace-wfkqv" Jan 27 10:30:56 crc kubenswrapper[4869]: I0127 10:30:56.256162 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsnls\" (UniqueName: \"kubernetes.io/projected/9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf-kube-api-access-gsnls\") pod \"redhat-marketplace-wfkqv\" (UID: \"9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf\") " pod="openshift-marketplace/redhat-marketplace-wfkqv" Jan 27 10:30:56 crc kubenswrapper[4869]: I0127 10:30:56.308696 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfkqv" Jan 27 10:30:56 crc kubenswrapper[4869]: I0127 10:30:56.729815 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfkqv"] Jan 27 10:30:57 crc kubenswrapper[4869]: I0127 10:30:57.320268 4869 generic.go:334] "Generic (PLEG): container finished" podID="9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf" containerID="7b17be189efdfadb2ddb27d601aef3309936f20fa9a69ad9a85c81e675c9b3d3" exitCode=0 Jan 27 10:30:57 crc kubenswrapper[4869]: I0127 10:30:57.320336 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfkqv" event={"ID":"9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf","Type":"ContainerDied","Data":"7b17be189efdfadb2ddb27d601aef3309936f20fa9a69ad9a85c81e675c9b3d3"} Jan 27 10:30:57 crc kubenswrapper[4869]: I0127 10:30:57.320587 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfkqv" event={"ID":"9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf","Type":"ContainerStarted","Data":"f1517d727d343fc2004c3629ffe96d77ad210d018b67240ce2a6bfa4ff8594d1"} Jan 27 10:30:58 crc kubenswrapper[4869]: I0127 10:30:58.340066 4869 generic.go:334] "Generic (PLEG): container finished" podID="9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf" containerID="43679125888b8b7b40e2bd663c242fa245216a3a24549d9d7bea3d4982944ef6" exitCode=0 Jan 27 10:30:58 crc kubenswrapper[4869]: I0127 10:30:58.340256 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfkqv" event={"ID":"9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf","Type":"ContainerDied","Data":"43679125888b8b7b40e2bd663c242fa245216a3a24549d9d7bea3d4982944ef6"} Jan 27 10:30:59 crc kubenswrapper[4869]: I0127 10:30:59.349037 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfkqv" event={"ID":"9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf","Type":"ContainerStarted","Data":"cea7286228b5adc23c3897e7bc4e2f6e023159e91c41e8b456775e9ea5af2c16"} Jan 27 10:30:59 crc kubenswrapper[4869]: I0127 10:30:59.367325 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wfkqv" podStartSLOduration=2.945999386 podStartE2EDuration="4.367302982s" podCreationTimestamp="2026-01-27 10:30:55 +0000 UTC" firstStartedPulling="2026-01-27 10:30:57.321713759 +0000 UTC m=+2225.942137852" lastFinishedPulling="2026-01-27 10:30:58.743017355 +0000 UTC m=+2227.363441448" observedRunningTime="2026-01-27 10:30:59.364813724 +0000 UTC m=+2227.985237817" watchObservedRunningTime="2026-01-27 10:30:59.367302982 +0000 UTC m=+2227.987727075" Jan 27 10:31:01 crc kubenswrapper[4869]: I0127 10:31:01.052492 4869 scope.go:117] "RemoveContainer" containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:31:01 crc kubenswrapper[4869]: I0127 10:31:01.365252 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerStarted","Data":"d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac"} Jan 27 10:31:01 crc kubenswrapper[4869]: I0127 10:31:01.365794 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 10:31:05 crc kubenswrapper[4869]: I0127 10:31:05.397627 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" 
containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" exitCode=0 Jan 27 10:31:05 crc kubenswrapper[4869]: I0127 10:31:05.397705 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerDied","Data":"d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac"} Jan 27 10:31:05 crc kubenswrapper[4869]: I0127 10:31:05.398141 4869 scope.go:117] "RemoveContainer" containerID="74689d393f42217bbfccf8eae59aecb164d8bf0da9b9873586a322d411f51089" Jan 27 10:31:05 crc kubenswrapper[4869]: I0127 10:31:05.398750 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:31:05 crc kubenswrapper[4869]: E0127 10:31:05.398992 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:31:06 crc kubenswrapper[4869]: I0127 10:31:06.309790 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wfkqv" Jan 27 10:31:06 crc kubenswrapper[4869]: I0127 10:31:06.309915 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wfkqv" Jan 27 10:31:06 crc kubenswrapper[4869]: I0127 10:31:06.354092 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wfkqv" Jan 27 10:31:06 crc kubenswrapper[4869]: I0127 10:31:06.452062 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wfkqv" Jan 27 10:31:06 crc kubenswrapper[4869]: I0127 10:31:06.596031 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfkqv"] Jan 27 10:31:07 crc kubenswrapper[4869]: I0127 10:31:07.033097 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:31:07 crc kubenswrapper[4869]: E0127 10:31:07.033462 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:31:08 crc kubenswrapper[4869]: I0127 10:31:08.419691 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wfkqv" podUID="9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf" containerName="registry-server" containerID="cri-o://cea7286228b5adc23c3897e7bc4e2f6e023159e91c41e8b456775e9ea5af2c16" gracePeriod=2 Jan 27 10:31:08 crc kubenswrapper[4869]: I0127 10:31:08.833932 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfkqv" Jan 27 10:31:08 crc kubenswrapper[4869]: I0127 10:31:08.943206 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf-catalog-content\") pod \"9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf\" (UID: \"9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf\") " Jan 27 10:31:08 crc kubenswrapper[4869]: I0127 10:31:08.943276 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsnls\" (UniqueName: \"kubernetes.io/projected/9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf-kube-api-access-gsnls\") pod \"9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf\" (UID: \"9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf\") " Jan 27 10:31:08 crc kubenswrapper[4869]: I0127 10:31:08.943380 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf-utilities\") pod \"9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf\" (UID: \"9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf\") " Jan 27 10:31:08 crc kubenswrapper[4869]: I0127 10:31:08.945307 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf-utilities" (OuterVolumeSpecName: "utilities") pod "9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf" (UID: "9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:31:08 crc kubenswrapper[4869]: I0127 10:31:08.949499 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf-kube-api-access-gsnls" (OuterVolumeSpecName: "kube-api-access-gsnls") pod "9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf" (UID: "9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf"). InnerVolumeSpecName "kube-api-access-gsnls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:31:08 crc kubenswrapper[4869]: I0127 10:31:08.966940 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf" (UID: "9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:31:09 crc kubenswrapper[4869]: I0127 10:31:09.045766 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:31:09 crc kubenswrapper[4869]: I0127 10:31:09.045801 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsnls\" (UniqueName: \"kubernetes.io/projected/9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf-kube-api-access-gsnls\") on node \"crc\" DevicePath \"\"" Jan 27 10:31:09 crc kubenswrapper[4869]: I0127 10:31:09.045816 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:31:09 crc kubenswrapper[4869]: I0127 10:31:09.431203 4869 generic.go:334] "Generic (PLEG): container finished" podID="9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf" containerID="cea7286228b5adc23c3897e7bc4e2f6e023159e91c41e8b456775e9ea5af2c16" exitCode=0 Jan 27 10:31:09 crc kubenswrapper[4869]: I0127 10:31:09.431294 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfkqv" Jan 27 10:31:09 crc kubenswrapper[4869]: I0127 10:31:09.431290 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfkqv" event={"ID":"9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf","Type":"ContainerDied","Data":"cea7286228b5adc23c3897e7bc4e2f6e023159e91c41e8b456775e9ea5af2c16"} Jan 27 10:31:09 crc kubenswrapper[4869]: I0127 10:31:09.432138 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfkqv" event={"ID":"9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf","Type":"ContainerDied","Data":"f1517d727d343fc2004c3629ffe96d77ad210d018b67240ce2a6bfa4ff8594d1"} Jan 27 10:31:09 crc kubenswrapper[4869]: I0127 10:31:09.432165 4869 scope.go:117] "RemoveContainer" containerID="cea7286228b5adc23c3897e7bc4e2f6e023159e91c41e8b456775e9ea5af2c16" Jan 27 10:31:09 crc kubenswrapper[4869]: I0127 10:31:09.470511 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfkqv"] Jan 27 10:31:09 crc kubenswrapper[4869]: I0127 10:31:09.476464 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfkqv"] Jan 27 10:31:09 crc kubenswrapper[4869]: I0127 10:31:09.479467 4869 scope.go:117] "RemoveContainer" containerID="43679125888b8b7b40e2bd663c242fa245216a3a24549d9d7bea3d4982944ef6" Jan 27 10:31:09 crc kubenswrapper[4869]: I0127 10:31:09.499036 4869 scope.go:117] "RemoveContainer" containerID="7b17be189efdfadb2ddb27d601aef3309936f20fa9a69ad9a85c81e675c9b3d3" Jan 27 10:31:09 crc kubenswrapper[4869]: I0127 10:31:09.548939 4869 scope.go:117] "RemoveContainer" containerID="cea7286228b5adc23c3897e7bc4e2f6e023159e91c41e8b456775e9ea5af2c16" Jan 27 10:31:09 crc kubenswrapper[4869]: E0127 10:31:09.549538 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cea7286228b5adc23c3897e7bc4e2f6e023159e91c41e8b456775e9ea5af2c16\": container with ID starting with cea7286228b5adc23c3897e7bc4e2f6e023159e91c41e8b456775e9ea5af2c16 not found: ID does not exist" containerID="cea7286228b5adc23c3897e7bc4e2f6e023159e91c41e8b456775e9ea5af2c16" Jan 27 10:31:09 crc kubenswrapper[4869]: I0127 10:31:09.549600 4869 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea7286228b5adc23c3897e7bc4e2f6e023159e91c41e8b456775e9ea5af2c16"} err="failed to get container status \"cea7286228b5adc23c3897e7bc4e2f6e023159e91c41e8b456775e9ea5af2c16\": rpc error: code = NotFound desc = could not find container \"cea7286228b5adc23c3897e7bc4e2f6e023159e91c41e8b456775e9ea5af2c16\": container with ID starting with cea7286228b5adc23c3897e7bc4e2f6e023159e91c41e8b456775e9ea5af2c16 not found: ID does not exist" Jan 27 10:31:09 crc kubenswrapper[4869]: I0127 10:31:09.549627 4869 scope.go:117] "RemoveContainer" containerID="43679125888b8b7b40e2bd663c242fa245216a3a24549d9d7bea3d4982944ef6" Jan 27 10:31:09 crc kubenswrapper[4869]: E0127 10:31:09.550428 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43679125888b8b7b40e2bd663c242fa245216a3a24549d9d7bea3d4982944ef6\": container with ID starting with 43679125888b8b7b40e2bd663c242fa245216a3a24549d9d7bea3d4982944ef6 not found: ID does not exist" containerID="43679125888b8b7b40e2bd663c242fa245216a3a24549d9d7bea3d4982944ef6" Jan 27 10:31:09 crc kubenswrapper[4869]: I0127 10:31:09.550465 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43679125888b8b7b40e2bd663c242fa245216a3a24549d9d7bea3d4982944ef6"} err="failed to get container status \"43679125888b8b7b40e2bd663c242fa245216a3a24549d9d7bea3d4982944ef6\": rpc error: code = NotFound desc = could not find container \"43679125888b8b7b40e2bd663c242fa245216a3a24549d9d7bea3d4982944ef6\": container with ID starting with 43679125888b8b7b40e2bd663c242fa245216a3a24549d9d7bea3d4982944ef6 not found: ID does not exist" Jan 27 10:31:09 crc kubenswrapper[4869]: I0127 10:31:09.550502 4869 scope.go:117] "RemoveContainer" containerID="7b17be189efdfadb2ddb27d601aef3309936f20fa9a69ad9a85c81e675c9b3d3" Jan 27 10:31:09 crc kubenswrapper[4869]: E0127 10:31:09.550794 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b17be189efdfadb2ddb27d601aef3309936f20fa9a69ad9a85c81e675c9b3d3\": container with ID starting with 7b17be189efdfadb2ddb27d601aef3309936f20fa9a69ad9a85c81e675c9b3d3 not found: ID does not exist" containerID="7b17be189efdfadb2ddb27d601aef3309936f20fa9a69ad9a85c81e675c9b3d3" Jan 27 10:31:09 crc kubenswrapper[4869]: I0127 10:31:09.550819 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b17be189efdfadb2ddb27d601aef3309936f20fa9a69ad9a85c81e675c9b3d3"} err="failed to get container status \"7b17be189efdfadb2ddb27d601aef3309936f20fa9a69ad9a85c81e675c9b3d3\": rpc error: code = NotFound desc = could not find container \"7b17be189efdfadb2ddb27d601aef3309936f20fa9a69ad9a85c81e675c9b3d3\": container with ID starting with 7b17be189efdfadb2ddb27d601aef3309936f20fa9a69ad9a85c81e675c9b3d3 not found: ID does not exist" Jan 27 10:31:10 crc kubenswrapper[4869]: I0127 10:31:10.049607 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf" path="/var/lib/kubelet/pods/9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf/volumes" Jan 27 10:31:15 crc kubenswrapper[4869]: I0127 10:31:15.697842 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:31:15 crc kubenswrapper[4869]: I0127 10:31:15.698442 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:31:19 crc kubenswrapper[4869]: I0127 10:31:19.033089 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:31:19 crc kubenswrapper[4869]: I0127 10:31:19.033520 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:31:19 crc kubenswrapper[4869]: E0127 10:31:19.033758 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:31:19 crc kubenswrapper[4869]: I0127 10:31:19.508141 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerStarted","Data":"89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80"} Jan 27 10:31:19 crc kubenswrapper[4869]: I0127 10:31:19.509396 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 10:31:20 crc kubenswrapper[4869]: I0127 10:31:20.192244 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tpzwx"] Jan 27 10:31:20 crc kubenswrapper[4869]: E0127 10:31:20.194905 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf" containerName="registry-server" Jan 27 10:31:20 crc kubenswrapper[4869]: I0127 10:31:20.195066 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf" containerName="registry-server" Jan 27 10:31:20 crc kubenswrapper[4869]: E0127 10:31:20.195199 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf" containerName="extract-content" Jan 27 10:31:20 crc kubenswrapper[4869]: I0127 10:31:20.195305 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf" containerName="extract-content" Jan 27 10:31:20 crc kubenswrapper[4869]: E0127 10:31:20.195476 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf" containerName="extract-utilities" Jan 27 10:31:20 crc kubenswrapper[4869]: I0127 10:31:20.195579 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf" containerName="extract-utilities" Jan 27 10:31:20 crc kubenswrapper[4869]: I0127 10:31:20.196326 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fc2ad21-aeaf-4f5a-9a06-42dbab1b7aaf" containerName="registry-server" Jan 27 10:31:20 crc kubenswrapper[4869]: I0127 10:31:20.198527 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tpzwx" Jan 27 10:31:20 crc kubenswrapper[4869]: I0127 10:31:20.201570 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tpzwx"] Jan 27 10:31:20 crc kubenswrapper[4869]: I0127 10:31:20.352877 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vmlq\" (UniqueName: \"kubernetes.io/projected/ec5ef27b-032b-402b-97bf-bb3b340ceccb-kube-api-access-5vmlq\") pod \"community-operators-tpzwx\" (UID: \"ec5ef27b-032b-402b-97bf-bb3b340ceccb\") " pod="openshift-marketplace/community-operators-tpzwx" Jan 27 10:31:20 crc kubenswrapper[4869]: I0127 10:31:20.352945 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec5ef27b-032b-402b-97bf-bb3b340ceccb-catalog-content\") pod \"community-operators-tpzwx\" (UID: \"ec5ef27b-032b-402b-97bf-bb3b340ceccb\") " pod="openshift-marketplace/community-operators-tpzwx" Jan 27 10:31:20 crc kubenswrapper[4869]: I0127 10:31:20.353060 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec5ef27b-032b-402b-97bf-bb3b340ceccb-utilities\") pod \"community-operators-tpzwx\" (UID: \"ec5ef27b-032b-402b-97bf-bb3b340ceccb\") " pod="openshift-marketplace/community-operators-tpzwx" Jan 27 10:31:20 crc kubenswrapper[4869]: I0127 10:31:20.454966 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec5ef27b-032b-402b-97bf-bb3b340ceccb-catalog-content\") pod \"community-operators-tpzwx\" (UID: \"ec5ef27b-032b-402b-97bf-bb3b340ceccb\") " pod="openshift-marketplace/community-operators-tpzwx" Jan 27 10:31:20 crc kubenswrapper[4869]: I0127 10:31:20.455057 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec5ef27b-032b-402b-97bf-bb3b340ceccb-utilities\") pod \"community-operators-tpzwx\" (UID: \"ec5ef27b-032b-402b-97bf-bb3b340ceccb\") " pod="openshift-marketplace/community-operators-tpzwx" Jan 27 10:31:20 crc kubenswrapper[4869]: I0127 10:31:20.455149 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vmlq\" (UniqueName: \"kubernetes.io/projected/ec5ef27b-032b-402b-97bf-bb3b340ceccb-kube-api-access-5vmlq\") pod \"community-operators-tpzwx\" (UID: \"ec5ef27b-032b-402b-97bf-bb3b340ceccb\") " pod="openshift-marketplace/community-operators-tpzwx" Jan 27 10:31:20 crc kubenswrapper[4869]: I0127 10:31:20.455484 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec5ef27b-032b-402b-97bf-bb3b340ceccb-catalog-content\") pod \"community-operators-tpzwx\" (UID: \"ec5ef27b-032b-402b-97bf-bb3b340ceccb\") " pod="openshift-marketplace/community-operators-tpzwx" Jan 27 10:31:20 crc kubenswrapper[4869]: I0127 10:31:20.455882 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec5ef27b-032b-402b-97bf-bb3b340ceccb-utilities\") pod \"community-operators-tpzwx\" (UID: \"ec5ef27b-032b-402b-97bf-bb3b340ceccb\") " pod="openshift-marketplace/community-operators-tpzwx" Jan 27 10:31:20 crc kubenswrapper[4869]: I0127 10:31:20.482374 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5vmlq\" (UniqueName: \"kubernetes.io/projected/ec5ef27b-032b-402b-97bf-bb3b340ceccb-kube-api-access-5vmlq\") pod \"community-operators-tpzwx\" (UID: \"ec5ef27b-032b-402b-97bf-bb3b340ceccb\") " pod="openshift-marketplace/community-operators-tpzwx" Jan 27 10:31:20 crc kubenswrapper[4869]: I0127 10:31:20.518419 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tpzwx" Jan 27 10:31:20 crc kubenswrapper[4869]: I0127 10:31:20.871374 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tpzwx"] Jan 27 10:31:20 crc kubenswrapper[4869]: W0127 10:31:20.875951 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec5ef27b_032b_402b_97bf_bb3b340ceccb.slice/crio-d8d3557af963058549c3e0055a54313fb7ac60a33e9412d962693dc0947b471d WatchSource:0}: Error finding container d8d3557af963058549c3e0055a54313fb7ac60a33e9412d962693dc0947b471d: Status 404 returned error can't find the container with id d8d3557af963058549c3e0055a54313fb7ac60a33e9412d962693dc0947b471d Jan 27 10:31:21 crc kubenswrapper[4869]: I0127 10:31:21.522501 4869 generic.go:334] "Generic (PLEG): container finished" podID="ec5ef27b-032b-402b-97bf-bb3b340ceccb" containerID="44f1a8904918e1faf6784e1312f8d4eeb60fad8297b22921d53fbacba8519c48" exitCode=0 Jan 27 10:31:21 crc kubenswrapper[4869]: I0127 10:31:21.522594 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpzwx" event={"ID":"ec5ef27b-032b-402b-97bf-bb3b340ceccb","Type":"ContainerDied","Data":"44f1a8904918e1faf6784e1312f8d4eeb60fad8297b22921d53fbacba8519c48"} Jan 27 10:31:21 crc kubenswrapper[4869]: I0127 10:31:21.522759 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpzwx" event={"ID":"ec5ef27b-032b-402b-97bf-bb3b340ceccb","Type":"ContainerStarted","Data":"d8d3557af963058549c3e0055a54313fb7ac60a33e9412d962693dc0947b471d"} Jan 27 10:31:23 crc kubenswrapper[4869]: I0127 10:31:23.558735 4869 generic.go:334] "Generic (PLEG): container finished" podID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" exitCode=0 Jan 27 10:31:23 crc kubenswrapper[4869]: I0127 10:31:23.558816 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerDied","Data":"89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80"} Jan 27 10:31:23 crc kubenswrapper[4869]: I0127 10:31:23.559132 4869 scope.go:117] "RemoveContainer" containerID="9d70ae4bbb53e64cbf5af181bd54dc0bca940793a43a662acb8c8f9c002fe7f5" Jan 27 10:31:23 crc kubenswrapper[4869]: I0127 10:31:23.559885 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:31:23 crc kubenswrapper[4869]: E0127 10:31:23.560132 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:31:25 crc kubenswrapper[4869]: I0127 10:31:25.573704 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="ec5ef27b-032b-402b-97bf-bb3b340ceccb" containerID="dabd95afe9e570aaad218cb7695c9b3a54fff741d147c561d748b04ebc251af0" exitCode=0 Jan 27 10:31:25 crc kubenswrapper[4869]: I0127 10:31:25.573743 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpzwx" event={"ID":"ec5ef27b-032b-402b-97bf-bb3b340ceccb","Type":"ContainerDied","Data":"dabd95afe9e570aaad218cb7695c9b3a54fff741d147c561d748b04ebc251af0"} Jan 27 10:31:26 crc kubenswrapper[4869]: I0127 10:31:26.585543 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpzwx" event={"ID":"ec5ef27b-032b-402b-97bf-bb3b340ceccb","Type":"ContainerStarted","Data":"d9899207f59b7b34941587762fef03c6825b1fe597c2ef554bf56e8238a38929"} Jan 27 10:31:26 crc kubenswrapper[4869]: I0127 10:31:26.607670 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tpzwx" podStartSLOduration=2.167402348 podStartE2EDuration="6.607651153s" podCreationTimestamp="2026-01-27 10:31:20 +0000 UTC" firstStartedPulling="2026-01-27 10:31:21.524340032 +0000 UTC m=+2250.144764115" lastFinishedPulling="2026-01-27 10:31:25.964588837 +0000 UTC m=+2254.585012920" observedRunningTime="2026-01-27 10:31:26.602610095 +0000 UTC m=+2255.223034188" watchObservedRunningTime="2026-01-27 10:31:26.607651153 +0000 UTC m=+2255.228075236" Jan 27 10:31:30 crc kubenswrapper[4869]: I0127 10:31:30.519194 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tpzwx" Jan 27 10:31:30 crc kubenswrapper[4869]: I0127 10:31:30.519554 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tpzwx" Jan 27 10:31:30 crc kubenswrapper[4869]: I0127 10:31:30.565955 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tpzwx" Jan 27 10:31:31 crc kubenswrapper[4869]: I0127 10:31:31.681573 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tpzwx" Jan 27 10:31:31 crc kubenswrapper[4869]: I0127 10:31:31.764932 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tpzwx"] Jan 27 10:31:31 crc kubenswrapper[4869]: I0127 10:31:31.812080 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jnkxt"] Jan 27 10:31:31 crc kubenswrapper[4869]: I0127 10:31:31.812321 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jnkxt" podUID="9051fa8e-7223-46e5-b408-a806a99c45c2" containerName="registry-server" containerID="cri-o://d9065fefaf7f97f7e472ca7f0e34a6caffb87f5a3d0eed76a64ced55c830fa75" gracePeriod=2 Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.196973 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jnkxt" Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.352283 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9051fa8e-7223-46e5-b408-a806a99c45c2-catalog-content\") pod \"9051fa8e-7223-46e5-b408-a806a99c45c2\" (UID: \"9051fa8e-7223-46e5-b408-a806a99c45c2\") " Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.352675 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9051fa8e-7223-46e5-b408-a806a99c45c2-utilities\") pod \"9051fa8e-7223-46e5-b408-a806a99c45c2\" (UID: \"9051fa8e-7223-46e5-b408-a806a99c45c2\") " Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.352732 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp4px\" (UniqueName: \"kubernetes.io/projected/9051fa8e-7223-46e5-b408-a806a99c45c2-kube-api-access-lp4px\") pod \"9051fa8e-7223-46e5-b408-a806a99c45c2\" (UID: \"9051fa8e-7223-46e5-b408-a806a99c45c2\") " Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.353223 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9051fa8e-7223-46e5-b408-a806a99c45c2-utilities" (OuterVolumeSpecName: "utilities") pod "9051fa8e-7223-46e5-b408-a806a99c45c2" (UID: "9051fa8e-7223-46e5-b408-a806a99c45c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.360512 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9051fa8e-7223-46e5-b408-a806a99c45c2-kube-api-access-lp4px" (OuterVolumeSpecName: "kube-api-access-lp4px") pod "9051fa8e-7223-46e5-b408-a806a99c45c2" (UID: "9051fa8e-7223-46e5-b408-a806a99c45c2"). InnerVolumeSpecName "kube-api-access-lp4px". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.404487 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9051fa8e-7223-46e5-b408-a806a99c45c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9051fa8e-7223-46e5-b408-a806a99c45c2" (UID: "9051fa8e-7223-46e5-b408-a806a99c45c2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.454232 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9051fa8e-7223-46e5-b408-a806a99c45c2-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.454272 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp4px\" (UniqueName: \"kubernetes.io/projected/9051fa8e-7223-46e5-b408-a806a99c45c2-kube-api-access-lp4px\") on node \"crc\" DevicePath \"\"" Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.454284 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9051fa8e-7223-46e5-b408-a806a99c45c2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.627026 4869 generic.go:334] "Generic (PLEG): container finished" podID="9051fa8e-7223-46e5-b408-a806a99c45c2" containerID="d9065fefaf7f97f7e472ca7f0e34a6caffb87f5a3d0eed76a64ced55c830fa75" exitCode=0 Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.627107 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jnkxt" event={"ID":"9051fa8e-7223-46e5-b408-a806a99c45c2","Type":"ContainerDied","Data":"d9065fefaf7f97f7e472ca7f0e34a6caffb87f5a3d0eed76a64ced55c830fa75"} Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.627160 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jnkxt" event={"ID":"9051fa8e-7223-46e5-b408-a806a99c45c2","Type":"ContainerDied","Data":"b7739e045f2d015abc01d0657f2f10f304daa6756e9c4e46e7e7502d017e0e00"} Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.627177 4869 scope.go:117] "RemoveContainer" containerID="d9065fefaf7f97f7e472ca7f0e34a6caffb87f5a3d0eed76a64ced55c830fa75" Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.627732 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jnkxt" Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.644110 4869 scope.go:117] "RemoveContainer" containerID="f69e7f6aaf174f684c37bc91408e95a075aa3f4422eb3d50fe9df65fbc4f4736" Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.680216 4869 scope.go:117] "RemoveContainer" containerID="16aa166edde7a89581a53a60925ceba4bf393d36b76d061f77d84e758ccc1462" Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.705012 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jnkxt"] Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.713057 4869 scope.go:117] "RemoveContainer" containerID="d9065fefaf7f97f7e472ca7f0e34a6caffb87f5a3d0eed76a64ced55c830fa75" Jan 27 10:31:32 crc kubenswrapper[4869]: E0127 10:31:32.722626 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9065fefaf7f97f7e472ca7f0e34a6caffb87f5a3d0eed76a64ced55c830fa75\": container with ID starting with d9065fefaf7f97f7e472ca7f0e34a6caffb87f5a3d0eed76a64ced55c830fa75 not found: ID does not exist" containerID="d9065fefaf7f97f7e472ca7f0e34a6caffb87f5a3d0eed76a64ced55c830fa75" Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.722693 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9065fefaf7f97f7e472ca7f0e34a6caffb87f5a3d0eed76a64ced55c830fa75"} err="failed to get container status \"d9065fefaf7f97f7e472ca7f0e34a6caffb87f5a3d0eed76a64ced55c830fa75\": rpc error: code = NotFound desc = could not find container \"d9065fefaf7f97f7e472ca7f0e34a6caffb87f5a3d0eed76a64ced55c830fa75\": container with ID starting with d9065fefaf7f97f7e472ca7f0e34a6caffb87f5a3d0eed76a64ced55c830fa75 not found: ID does not exist" Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.722717 4869 scope.go:117] "RemoveContainer" containerID="f69e7f6aaf174f684c37bc91408e95a075aa3f4422eb3d50fe9df65fbc4f4736" Jan 27 10:31:32 crc kubenswrapper[4869]: E0127 10:31:32.725002 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f69e7f6aaf174f684c37bc91408e95a075aa3f4422eb3d50fe9df65fbc4f4736\": container with ID starting with f69e7f6aaf174f684c37bc91408e95a075aa3f4422eb3d50fe9df65fbc4f4736 not found: ID does not exist" containerID="f69e7f6aaf174f684c37bc91408e95a075aa3f4422eb3d50fe9df65fbc4f4736" Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.725131 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f69e7f6aaf174f684c37bc91408e95a075aa3f4422eb3d50fe9df65fbc4f4736"} err="failed to get container status \"f69e7f6aaf174f684c37bc91408e95a075aa3f4422eb3d50fe9df65fbc4f4736\": rpc error: code = NotFound desc = could not find container \"f69e7f6aaf174f684c37bc91408e95a075aa3f4422eb3d50fe9df65fbc4f4736\": container with ID starting with f69e7f6aaf174f684c37bc91408e95a075aa3f4422eb3d50fe9df65fbc4f4736 not found: ID does not exist" Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.725205 4869 scope.go:117] "RemoveContainer" containerID="16aa166edde7a89581a53a60925ceba4bf393d36b76d061f77d84e758ccc1462" Jan 27 10:31:32 crc kubenswrapper[4869]: E0127 10:31:32.725557 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16aa166edde7a89581a53a60925ceba4bf393d36b76d061f77d84e758ccc1462\": container with ID starting with 
16aa166edde7a89581a53a60925ceba4bf393d36b76d061f77d84e758ccc1462 not found: ID does not exist" containerID="16aa166edde7a89581a53a60925ceba4bf393d36b76d061f77d84e758ccc1462" Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.725649 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16aa166edde7a89581a53a60925ceba4bf393d36b76d061f77d84e758ccc1462"} err="failed to get container status \"16aa166edde7a89581a53a60925ceba4bf393d36b76d061f77d84e758ccc1462\": rpc error: code = NotFound desc = could not find container \"16aa166edde7a89581a53a60925ceba4bf393d36b76d061f77d84e758ccc1462\": container with ID starting with 16aa166edde7a89581a53a60925ceba4bf393d36b76d061f77d84e758ccc1462 not found: ID does not exist" Jan 27 10:31:32 crc kubenswrapper[4869]: I0127 10:31:32.727360 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jnkxt"] Jan 27 10:31:33 crc kubenswrapper[4869]: I0127 10:31:33.033821 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:31:33 crc kubenswrapper[4869]: E0127 10:31:33.034070 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:31:34 crc kubenswrapper[4869]: I0127 10:31:34.042988 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9051fa8e-7223-46e5-b408-a806a99c45c2" path="/var/lib/kubelet/pods/9051fa8e-7223-46e5-b408-a806a99c45c2/volumes" Jan 27 10:31:36 crc kubenswrapper[4869]: I0127 10:31:36.033409 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:31:36 crc kubenswrapper[4869]: E0127 10:31:36.033748 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:31:45 crc kubenswrapper[4869]: I0127 10:31:45.033271 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:31:45 crc kubenswrapper[4869]: E0127 10:31:45.035290 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:31:45 crc kubenswrapper[4869]: I0127 10:31:45.698409 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:31:45 crc kubenswrapper[4869]: I0127 10:31:45.698507 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:31:45 crc kubenswrapper[4869]: I0127 10:31:45.698569 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 10:31:45 crc kubenswrapper[4869]: I0127 10:31:45.699603 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490"} pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 10:31:45 crc kubenswrapper[4869]: I0127 10:31:45.699714 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" containerID="cri-o://64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" gracePeriod=600 Jan 27 10:31:45 crc kubenswrapper[4869]: E0127 10:31:45.825388 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:31:46 crc kubenswrapper[4869]: I0127 10:31:46.736304 4869 generic.go:334] "Generic (PLEG): container finished" podID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" exitCode=0 Jan 27 10:31:46 crc kubenswrapper[4869]: I0127 10:31:46.736369 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerDied","Data":"64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490"} Jan 27 10:31:46 crc kubenswrapper[4869]: I0127 10:31:46.736423 4869 scope.go:117] "RemoveContainer" containerID="4bcba44fbcb50f71e5672c0ef23b355f7a8a7dd5428b67fc78b24dacaac2e337" Jan 27 10:31:46 crc kubenswrapper[4869]: I0127 10:31:46.737038 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:31:46 crc kubenswrapper[4869]: E0127 10:31:46.737332 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:31:49 crc kubenswrapper[4869]: I0127 10:31:49.033619 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:31:49 crc kubenswrapper[4869]: E0127 10:31:49.034156 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq 
pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:31:58 crc kubenswrapper[4869]: I0127 10:31:58.033940 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:31:58 crc kubenswrapper[4869]: E0127 10:31:58.035125 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:31:59 crc kubenswrapper[4869]: I0127 10:31:59.032641 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:31:59 crc kubenswrapper[4869]: E0127 10:31:59.033290 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:32:03 crc kubenswrapper[4869]: I0127 10:32:03.032663 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:32:03 crc kubenswrapper[4869]: E0127 10:32:03.033261 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:32:10 crc kubenswrapper[4869]: I0127 10:32:10.033261 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:32:10 crc kubenswrapper[4869]: E0127 10:32:10.034042 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:32:12 crc kubenswrapper[4869]: I0127 10:32:12.037018 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:32:12 crc kubenswrapper[4869]: E0127 10:32:12.037373 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:32:16 crc kubenswrapper[4869]: I0127 10:32:16.033007 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:32:16 crc kubenswrapper[4869]: E0127 10:32:16.033721 4869 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:32:21 crc kubenswrapper[4869]: I0127 10:32:21.033372 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:32:21 crc kubenswrapper[4869]: E0127 10:32:21.034081 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:32:23 crc kubenswrapper[4869]: I0127 10:32:23.033683 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:32:23 crc kubenswrapper[4869]: E0127 10:32:23.034330 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:32:29 crc kubenswrapper[4869]: I0127 10:32:29.033498 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:32:29 crc kubenswrapper[4869]: E0127 10:32:29.034363 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:32:32 crc kubenswrapper[4869]: I0127 10:32:32.037199 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:32:32 crc kubenswrapper[4869]: E0127 10:32:32.037986 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:32:35 crc kubenswrapper[4869]: I0127 10:32:35.033662 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:32:35 crc kubenswrapper[4869]: E0127 10:32:35.034201 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:32:40 crc kubenswrapper[4869]: I0127 10:32:40.033681 4869 scope.go:117] "RemoveContainer" 
containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:32:40 crc kubenswrapper[4869]: E0127 10:32:40.034178 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:32:45 crc kubenswrapper[4869]: I0127 10:32:45.033185 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:32:45 crc kubenswrapper[4869]: E0127 10:32:45.034120 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:32:49 crc kubenswrapper[4869]: I0127 10:32:49.033302 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:32:49 crc kubenswrapper[4869]: E0127 10:32:49.034199 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:32:54 crc kubenswrapper[4869]: I0127 10:32:54.034135 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:32:54 crc kubenswrapper[4869]: E0127 10:32:54.034605 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:33:00 crc kubenswrapper[4869]: I0127 10:33:00.034435 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:33:00 crc kubenswrapper[4869]: I0127 10:33:00.035059 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:33:00 crc kubenswrapper[4869]: E0127 10:33:00.035131 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:33:00 crc kubenswrapper[4869]: E0127 10:33:00.035456 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" 
podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:33:07 crc kubenswrapper[4869]: I0127 10:33:07.033077 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:33:07 crc kubenswrapper[4869]: E0127 10:33:07.033660 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:33:12 crc kubenswrapper[4869]: I0127 10:33:12.042223 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:33:12 crc kubenswrapper[4869]: E0127 10:33:12.042818 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:33:15 crc kubenswrapper[4869]: I0127 10:33:15.033625 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:33:15 crc kubenswrapper[4869]: E0127 10:33:15.034386 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:33:19 crc kubenswrapper[4869]: I0127 10:33:19.033877 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:33:19 crc kubenswrapper[4869]: E0127 10:33:19.035728 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:33:25 crc kubenswrapper[4869]: I0127 10:33:25.033859 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:33:25 crc kubenswrapper[4869]: E0127 10:33:25.034978 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:33:30 crc kubenswrapper[4869]: I0127 10:33:30.034573 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:33:30 crc kubenswrapper[4869]: E0127 10:33:30.035670 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:33:32 crc kubenswrapper[4869]: I0127 10:33:32.038317 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:33:32 crc kubenswrapper[4869]: E0127 10:33:32.038920 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:33:40 crc kubenswrapper[4869]: I0127 10:33:40.033820 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:33:40 crc kubenswrapper[4869]: E0127 10:33:40.034982 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:33:44 crc kubenswrapper[4869]: I0127 10:33:44.034264 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:33:44 crc kubenswrapper[4869]: E0127 10:33:44.035113 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:33:45 crc kubenswrapper[4869]: I0127 10:33:45.033945 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:33:45 crc kubenswrapper[4869]: E0127 10:33:45.034441 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:33:55 crc kubenswrapper[4869]: I0127 10:33:55.033525 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:33:55 crc kubenswrapper[4869]: I0127 10:33:55.034188 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:33:55 crc kubenswrapper[4869]: E0127 10:33:55.034343 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:33:55 crc kubenswrapper[4869]: E0127 10:33:55.034611 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:33:58 crc kubenswrapper[4869]: I0127 10:33:58.033507 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:33:58 crc kubenswrapper[4869]: E0127 10:33:58.034073 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:34:07 crc kubenswrapper[4869]: I0127 10:34:07.033000 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:34:07 crc kubenswrapper[4869]: E0127 10:34:07.033760 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:34:09 crc kubenswrapper[4869]: I0127 10:34:09.033615 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:34:09 crc kubenswrapper[4869]: E0127 10:34:09.034208 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:34:13 crc kubenswrapper[4869]: I0127 10:34:13.033371 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:34:13 crc kubenswrapper[4869]: E0127 10:34:13.033866 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:34:21 crc kubenswrapper[4869]: I0127 10:34:21.033212 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:34:21 crc kubenswrapper[4869]: E0127 10:34:21.034529 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:34:22 crc kubenswrapper[4869]: I0127 10:34:22.042352 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:34:22 crc 
kubenswrapper[4869]: E0127 10:34:22.042860 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:34:24 crc kubenswrapper[4869]: I0127 10:34:24.033641 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:34:24 crc kubenswrapper[4869]: E0127 10:34:24.034296 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:34:34 crc kubenswrapper[4869]: I0127 10:34:34.033745 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:34:34 crc kubenswrapper[4869]: E0127 10:34:34.034625 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:34:35 crc kubenswrapper[4869]: I0127 10:34:35.033959 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:34:35 crc kubenswrapper[4869]: E0127 10:34:35.034641 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:34:37 crc kubenswrapper[4869]: I0127 10:34:37.033453 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:34:37 crc kubenswrapper[4869]: E0127 10:34:37.034078 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:34:48 crc kubenswrapper[4869]: I0127 10:34:48.033409 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:34:48 crc kubenswrapper[4869]: E0127 10:34:48.034480 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:34:49 crc kubenswrapper[4869]: I0127 10:34:49.033278 4869 
scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:34:49 crc kubenswrapper[4869]: E0127 10:34:49.033693 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:34:52 crc kubenswrapper[4869]: I0127 10:34:52.045313 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:34:52 crc kubenswrapper[4869]: E0127 10:34:52.046074 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:35:00 crc kubenswrapper[4869]: I0127 10:35:00.033557 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:35:00 crc kubenswrapper[4869]: E0127 10:35:00.034545 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:35:01 crc kubenswrapper[4869]: I0127 10:35:01.034480 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:35:01 crc kubenswrapper[4869]: E0127 10:35:01.034882 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:35:07 crc kubenswrapper[4869]: I0127 10:35:07.033036 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:35:07 crc kubenswrapper[4869]: E0127 10:35:07.033918 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:35:11 crc kubenswrapper[4869]: I0127 10:35:11.034087 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:35:11 crc kubenswrapper[4869]: E0127 10:35:11.034655 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:35:14 crc kubenswrapper[4869]: I0127 10:35:14.034229 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:35:14 crc kubenswrapper[4869]: E0127 10:35:14.035118 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:35:20 crc kubenswrapper[4869]: I0127 10:35:20.033798 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:35:20 crc kubenswrapper[4869]: E0127 10:35:20.035224 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:35:22 crc kubenswrapper[4869]: I0127 10:35:22.037793 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:35:22 crc kubenswrapper[4869]: E0127 10:35:22.038436 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:35:27 crc kubenswrapper[4869]: I0127 10:35:27.032994 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:35:27 crc kubenswrapper[4869]: E0127 10:35:27.033628 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:35:32 crc kubenswrapper[4869]: I0127 10:35:32.042908 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:35:32 crc kubenswrapper[4869]: E0127 10:35:32.043897 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:35:37 crc kubenswrapper[4869]: I0127 10:35:37.032694 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:35:37 crc kubenswrapper[4869]: E0127 10:35:37.032987 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:35:42 crc kubenswrapper[4869]: I0127 10:35:42.037860 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:35:42 crc kubenswrapper[4869]: E0127 10:35:42.038573 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:35:43 crc kubenswrapper[4869]: I0127 10:35:43.033690 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:35:43 crc kubenswrapper[4869]: E0127 10:35:43.033962 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:35:48 crc kubenswrapper[4869]: I0127 10:35:48.033815 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:35:48 crc kubenswrapper[4869]: E0127 10:35:48.034893 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:35:53 crc kubenswrapper[4869]: I0127 10:35:53.033278 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:35:53 crc kubenswrapper[4869]: E0127 10:35:53.033776 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:35:54 crc kubenswrapper[4869]: I0127 10:35:54.033651 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:35:54 crc kubenswrapper[4869]: E0127 10:35:54.034211 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:36:00 crc kubenswrapper[4869]: I0127 10:36:00.034005 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 
10:36:00 crc kubenswrapper[4869]: E0127 10:36:00.035298 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:36:05 crc kubenswrapper[4869]: I0127 10:36:05.033235 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:36:05 crc kubenswrapper[4869]: I0127 10:36:05.034110 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:36:05 crc kubenswrapper[4869]: E0127 10:36:05.034347 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:36:05 crc kubenswrapper[4869]: E0127 10:36:05.034453 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:36:12 crc kubenswrapper[4869]: I0127 10:36:12.042586 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:36:12 crc kubenswrapper[4869]: E0127 10:36:12.043271 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:36:18 crc kubenswrapper[4869]: I0127 10:36:18.033044 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:36:18 crc kubenswrapper[4869]: E0127 10:36:18.033771 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:36:19 crc kubenswrapper[4869]: I0127 10:36:19.033401 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:36:20 crc kubenswrapper[4869]: I0127 10:36:20.083394 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerStarted","Data":"a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d"} Jan 27 10:36:20 crc kubenswrapper[4869]: I0127 10:36:20.084374 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/rabbitmq-cell1-server-0" Jan 27 10:36:24 crc kubenswrapper[4869]: I0127 10:36:24.125871 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" exitCode=0 Jan 27 10:36:24 crc kubenswrapper[4869]: I0127 10:36:24.126122 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerDied","Data":"a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d"} Jan 27 10:36:24 crc kubenswrapper[4869]: I0127 10:36:24.126375 4869 scope.go:117] "RemoveContainer" containerID="d8f8cf0aead234394cbf4fb21b85d99012b710f777a9a7270d37255edf722cac" Jan 27 10:36:24 crc kubenswrapper[4869]: I0127 10:36:24.127118 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:36:24 crc kubenswrapper[4869]: E0127 10:36:24.127627 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:36:25 crc kubenswrapper[4869]: I0127 10:36:25.033748 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:36:25 crc kubenswrapper[4869]: E0127 10:36:25.035003 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:36:32 crc kubenswrapper[4869]: I0127 10:36:32.043783 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:36:33 crc kubenswrapper[4869]: I0127 10:36:33.208145 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerStarted","Data":"b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb"} Jan 27 10:36:33 crc kubenswrapper[4869]: I0127 10:36:33.209001 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 10:36:37 crc kubenswrapper[4869]: I0127 10:36:37.247142 4869 generic.go:334] "Generic (PLEG): container finished" podID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" exitCode=0 Jan 27 10:36:37 crc kubenswrapper[4869]: I0127 10:36:37.247268 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerDied","Data":"b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb"} Jan 27 10:36:37 crc kubenswrapper[4869]: I0127 10:36:37.247457 4869 scope.go:117] "RemoveContainer" containerID="89504d9c6621d019929f36a14232d03826c6464915aa494ad9b47c1453984e80" Jan 27 10:36:37 crc kubenswrapper[4869]: I0127 10:36:37.248720 4869 scope.go:117] "RemoveContainer" 
containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:36:37 crc kubenswrapper[4869]: E0127 10:36:37.249210 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:36:39 crc kubenswrapper[4869]: I0127 10:36:39.033379 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:36:39 crc kubenswrapper[4869]: E0127 10:36:39.034235 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:36:40 crc kubenswrapper[4869]: I0127 10:36:40.033502 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:36:40 crc kubenswrapper[4869]: E0127 10:36:40.033953 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:36:48 crc kubenswrapper[4869]: I0127 10:36:48.033684 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:36:48 crc kubenswrapper[4869]: E0127 10:36:48.034643 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:36:53 crc kubenswrapper[4869]: I0127 10:36:53.034571 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:36:53 crc kubenswrapper[4869]: E0127 10:36:53.035406 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:36:54 crc kubenswrapper[4869]: I0127 10:36:54.033740 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:36:54 crc kubenswrapper[4869]: I0127 10:36:54.408843 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerStarted","Data":"f26f1fb2cee5006f1f75a2a2a614b9386b95a957b8a625a62d67f3bf0077c924"} Jan 27 10:37:02 crc kubenswrapper[4869]: I0127 10:37:02.037513 4869 scope.go:117] "RemoveContainer" 
containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:37:02 crc kubenswrapper[4869]: E0127 10:37:02.038441 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:37:04 crc kubenswrapper[4869]: I0127 10:37:04.033615 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:37:04 crc kubenswrapper[4869]: E0127 10:37:04.034246 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:37:13 crc kubenswrapper[4869]: I0127 10:37:13.033308 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:37:13 crc kubenswrapper[4869]: E0127 10:37:13.034247 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:37:15 crc kubenswrapper[4869]: I0127 10:37:15.033566 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:37:15 crc kubenswrapper[4869]: E0127 10:37:15.034345 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:37:27 crc kubenswrapper[4869]: I0127 10:37:27.033287 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:37:27 crc kubenswrapper[4869]: E0127 10:37:27.034747 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:37:28 crc kubenswrapper[4869]: I0127 10:37:28.034091 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:37:28 crc kubenswrapper[4869]: E0127 10:37:28.034819 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:37:40 crc kubenswrapper[4869]: I0127 10:37:40.034373 4869 scope.go:117] "RemoveContainer" 
containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:37:40 crc kubenswrapper[4869]: E0127 10:37:40.035365 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:37:42 crc kubenswrapper[4869]: I0127 10:37:42.042240 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:37:42 crc kubenswrapper[4869]: E0127 10:37:42.043354 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:37:51 crc kubenswrapper[4869]: I0127 10:37:51.033671 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:37:51 crc kubenswrapper[4869]: E0127 10:37:51.034551 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:37:56 crc kubenswrapper[4869]: I0127 10:37:56.034600 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:37:56 crc kubenswrapper[4869]: E0127 10:37:56.035826 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:38:06 crc kubenswrapper[4869]: I0127 10:38:06.033779 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:38:06 crc kubenswrapper[4869]: E0127 10:38:06.034512 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:38:07 crc kubenswrapper[4869]: I0127 10:38:07.033228 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:38:07 crc kubenswrapper[4869]: E0127 10:38:07.033760 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:38:20 crc kubenswrapper[4869]: I0127 10:38:20.032944 4869 scope.go:117] "RemoveContainer" 
containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:38:20 crc kubenswrapper[4869]: E0127 10:38:20.033710 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:38:21 crc kubenswrapper[4869]: I0127 10:38:21.033056 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:38:21 crc kubenswrapper[4869]: E0127 10:38:21.033357 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:38:26 crc kubenswrapper[4869]: I0127 10:38:26.818195 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qlp6v"] Jan 27 10:38:26 crc kubenswrapper[4869]: E0127 10:38:26.819191 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9051fa8e-7223-46e5-b408-a806a99c45c2" containerName="extract-utilities" Jan 27 10:38:26 crc kubenswrapper[4869]: I0127 10:38:26.819211 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9051fa8e-7223-46e5-b408-a806a99c45c2" containerName="extract-utilities" Jan 27 10:38:26 crc kubenswrapper[4869]: E0127 10:38:26.819239 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9051fa8e-7223-46e5-b408-a806a99c45c2" containerName="extract-content" Jan 27 10:38:26 crc kubenswrapper[4869]: I0127 10:38:26.819248 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9051fa8e-7223-46e5-b408-a806a99c45c2" containerName="extract-content" Jan 27 10:38:26 crc kubenswrapper[4869]: E0127 10:38:26.819271 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9051fa8e-7223-46e5-b408-a806a99c45c2" containerName="registry-server" Jan 27 10:38:26 crc kubenswrapper[4869]: I0127 10:38:26.819279 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9051fa8e-7223-46e5-b408-a806a99c45c2" containerName="registry-server" Jan 27 10:38:26 crc kubenswrapper[4869]: I0127 10:38:26.819451 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9051fa8e-7223-46e5-b408-a806a99c45c2" containerName="registry-server" Jan 27 10:38:26 crc kubenswrapper[4869]: I0127 10:38:26.820624 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qlp6v" Jan 27 10:38:26 crc kubenswrapper[4869]: I0127 10:38:26.830115 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qlp6v"] Jan 27 10:38:26 crc kubenswrapper[4869]: I0127 10:38:26.951293 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eadff3a0-aaea-41ca-8eca-349320b5b56c-utilities\") pod \"certified-operators-qlp6v\" (UID: \"eadff3a0-aaea-41ca-8eca-349320b5b56c\") " pod="openshift-marketplace/certified-operators-qlp6v" Jan 27 10:38:26 crc kubenswrapper[4869]: I0127 10:38:26.951432 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsbbh\" (UniqueName: \"kubernetes.io/projected/eadff3a0-aaea-41ca-8eca-349320b5b56c-kube-api-access-hsbbh\") pod \"certified-operators-qlp6v\" (UID: \"eadff3a0-aaea-41ca-8eca-349320b5b56c\") " pod="openshift-marketplace/certified-operators-qlp6v" Jan 27 10:38:26 crc kubenswrapper[4869]: I0127 10:38:26.951508 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eadff3a0-aaea-41ca-8eca-349320b5b56c-catalog-content\") pod \"certified-operators-qlp6v\" (UID: \"eadff3a0-aaea-41ca-8eca-349320b5b56c\") " pod="openshift-marketplace/certified-operators-qlp6v" Jan 27 10:38:27 crc kubenswrapper[4869]: I0127 10:38:27.053570 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eadff3a0-aaea-41ca-8eca-349320b5b56c-catalog-content\") pod \"certified-operators-qlp6v\" (UID: \"eadff3a0-aaea-41ca-8eca-349320b5b56c\") " pod="openshift-marketplace/certified-operators-qlp6v" Jan 27 10:38:27 crc kubenswrapper[4869]: I0127 10:38:27.053723 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eadff3a0-aaea-41ca-8eca-349320b5b56c-utilities\") pod \"certified-operators-qlp6v\" (UID: \"eadff3a0-aaea-41ca-8eca-349320b5b56c\") " pod="openshift-marketplace/certified-operators-qlp6v" Jan 27 10:38:27 crc kubenswrapper[4869]: I0127 10:38:27.053758 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsbbh\" (UniqueName: \"kubernetes.io/projected/eadff3a0-aaea-41ca-8eca-349320b5b56c-kube-api-access-hsbbh\") pod \"certified-operators-qlp6v\" (UID: \"eadff3a0-aaea-41ca-8eca-349320b5b56c\") " pod="openshift-marketplace/certified-operators-qlp6v" Jan 27 10:38:27 crc kubenswrapper[4869]: I0127 10:38:27.054234 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eadff3a0-aaea-41ca-8eca-349320b5b56c-catalog-content\") pod \"certified-operators-qlp6v\" (UID: \"eadff3a0-aaea-41ca-8eca-349320b5b56c\") " pod="openshift-marketplace/certified-operators-qlp6v" Jan 27 10:38:27 crc kubenswrapper[4869]: I0127 10:38:27.054288 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eadff3a0-aaea-41ca-8eca-349320b5b56c-utilities\") pod \"certified-operators-qlp6v\" (UID: \"eadff3a0-aaea-41ca-8eca-349320b5b56c\") " pod="openshift-marketplace/certified-operators-qlp6v" Jan 27 10:38:27 crc kubenswrapper[4869]: I0127 10:38:27.074022 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hsbbh\" (UniqueName: \"kubernetes.io/projected/eadff3a0-aaea-41ca-8eca-349320b5b56c-kube-api-access-hsbbh\") pod \"certified-operators-qlp6v\" (UID: \"eadff3a0-aaea-41ca-8eca-349320b5b56c\") " pod="openshift-marketplace/certified-operators-qlp6v" Jan 27 10:38:27 crc kubenswrapper[4869]: I0127 10:38:27.146718 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qlp6v" Jan 27 10:38:27 crc kubenswrapper[4869]: I0127 10:38:27.599488 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qlp6v"] Jan 27 10:38:28 crc kubenswrapper[4869]: I0127 10:38:28.222339 4869 generic.go:334] "Generic (PLEG): container finished" podID="eadff3a0-aaea-41ca-8eca-349320b5b56c" containerID="b7545a6e0a63e3958b21624eb63ed5fbddc73df9fd65fde620a83c42f00b8c7f" exitCode=0 Jan 27 10:38:28 crc kubenswrapper[4869]: I0127 10:38:28.222384 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qlp6v" event={"ID":"eadff3a0-aaea-41ca-8eca-349320b5b56c","Type":"ContainerDied","Data":"b7545a6e0a63e3958b21624eb63ed5fbddc73df9fd65fde620a83c42f00b8c7f"} Jan 27 10:38:28 crc kubenswrapper[4869]: I0127 10:38:28.222413 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qlp6v" event={"ID":"eadff3a0-aaea-41ca-8eca-349320b5b56c","Type":"ContainerStarted","Data":"508092340e2fa30c804060cf8868394b3112bed562608195e825409a01e3fcde"} Jan 27 10:38:28 crc kubenswrapper[4869]: I0127 10:38:28.228441 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 10:38:32 crc kubenswrapper[4869]: I0127 10:38:32.042163 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:38:32 crc kubenswrapper[4869]: E0127 10:38:32.043206 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:38:33 crc kubenswrapper[4869]: I0127 10:38:33.267502 4869 generic.go:334] "Generic (PLEG): container finished" podID="eadff3a0-aaea-41ca-8eca-349320b5b56c" containerID="0b30615b24008f88a3ae3bea065a3e61de3de031892fe9a1bfe1b4266df43ff7" exitCode=0 Jan 27 10:38:33 crc kubenswrapper[4869]: I0127 10:38:33.267571 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qlp6v" event={"ID":"eadff3a0-aaea-41ca-8eca-349320b5b56c","Type":"ContainerDied","Data":"0b30615b24008f88a3ae3bea065a3e61de3de031892fe9a1bfe1b4266df43ff7"} Jan 27 10:38:34 crc kubenswrapper[4869]: I0127 10:38:34.284476 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qlp6v" event={"ID":"eadff3a0-aaea-41ca-8eca-349320b5b56c","Type":"ContainerStarted","Data":"9b3fecb2525e5b156fd5313f6077ba1d6739b450caafd4b88364e4573edd0d91"} Jan 27 10:38:34 crc kubenswrapper[4869]: I0127 10:38:34.354458 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qlp6v" podStartSLOduration=2.90868696 podStartE2EDuration="8.354426438s" podCreationTimestamp="2026-01-27 10:38:26 +0000 UTC" 
firstStartedPulling="2026-01-27 10:38:28.22813683 +0000 UTC m=+2676.848560923" lastFinishedPulling="2026-01-27 10:38:33.673876318 +0000 UTC m=+2682.294300401" observedRunningTime="2026-01-27 10:38:34.317790165 +0000 UTC m=+2682.938214288" watchObservedRunningTime="2026-01-27 10:38:34.354426438 +0000 UTC m=+2682.974850541" Jan 27 10:38:35 crc kubenswrapper[4869]: I0127 10:38:35.032674 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:38:35 crc kubenswrapper[4869]: E0127 10:38:35.033015 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:38:37 crc kubenswrapper[4869]: I0127 10:38:37.147757 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qlp6v" Jan 27 10:38:37 crc kubenswrapper[4869]: I0127 10:38:37.148153 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qlp6v" Jan 27 10:38:37 crc kubenswrapper[4869]: I0127 10:38:37.223576 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qlp6v" Jan 27 10:38:44 crc kubenswrapper[4869]: I0127 10:38:44.033983 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:38:44 crc kubenswrapper[4869]: E0127 10:38:44.034443 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:38:47 crc kubenswrapper[4869]: I0127 10:38:47.190605 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qlp6v" Jan 27 10:38:47 crc kubenswrapper[4869]: I0127 10:38:47.434183 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qlp6v"] Jan 27 10:38:48 crc kubenswrapper[4869]: I0127 10:38:48.000376 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hhg6m"] Jan 27 10:38:48 crc kubenswrapper[4869]: I0127 10:38:48.000683 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hhg6m" podUID="4ab43b25-6ea3-4061-9c4a-6fb427539d3c" containerName="registry-server" containerID="cri-o://859280bf72fdff4b2100b04ec58d42521b3003789367e4d530c50a873f457fb0" gracePeriod=2 Jan 27 10:38:48 crc kubenswrapper[4869]: I0127 10:38:48.036040 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:38:48 crc kubenswrapper[4869]: E0127 10:38:48.036647 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 
10:38:48 crc kubenswrapper[4869]: I0127 10:38:48.413909 4869 generic.go:334] "Generic (PLEG): container finished" podID="4ab43b25-6ea3-4061-9c4a-6fb427539d3c" containerID="859280bf72fdff4b2100b04ec58d42521b3003789367e4d530c50a873f457fb0" exitCode=0 Jan 27 10:38:48 crc kubenswrapper[4869]: I0127 10:38:48.413983 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hhg6m" event={"ID":"4ab43b25-6ea3-4061-9c4a-6fb427539d3c","Type":"ContainerDied","Data":"859280bf72fdff4b2100b04ec58d42521b3003789367e4d530c50a873f457fb0"} Jan 27 10:38:48 crc kubenswrapper[4869]: I0127 10:38:48.414552 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hhg6m" event={"ID":"4ab43b25-6ea3-4061-9c4a-6fb427539d3c","Type":"ContainerDied","Data":"d8339f6b3731afa0f7f82161db5709e48014e75eab327b6c319e053f17d038ed"} Jan 27 10:38:48 crc kubenswrapper[4869]: I0127 10:38:48.414606 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8339f6b3731afa0f7f82161db5709e48014e75eab327b6c319e053f17d038ed" Jan 27 10:38:48 crc kubenswrapper[4869]: I0127 10:38:48.483225 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hhg6m" Jan 27 10:38:48 crc kubenswrapper[4869]: I0127 10:38:48.572750 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ab43b25-6ea3-4061-9c4a-6fb427539d3c-utilities\") pod \"4ab43b25-6ea3-4061-9c4a-6fb427539d3c\" (UID: \"4ab43b25-6ea3-4061-9c4a-6fb427539d3c\") " Jan 27 10:38:48 crc kubenswrapper[4869]: I0127 10:38:48.573071 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ab43b25-6ea3-4061-9c4a-6fb427539d3c-catalog-content\") pod \"4ab43b25-6ea3-4061-9c4a-6fb427539d3c\" (UID: \"4ab43b25-6ea3-4061-9c4a-6fb427539d3c\") " Jan 27 10:38:48 crc kubenswrapper[4869]: I0127 10:38:48.573195 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmn9j\" (UniqueName: \"kubernetes.io/projected/4ab43b25-6ea3-4061-9c4a-6fb427539d3c-kube-api-access-vmn9j\") pod \"4ab43b25-6ea3-4061-9c4a-6fb427539d3c\" (UID: \"4ab43b25-6ea3-4061-9c4a-6fb427539d3c\") " Jan 27 10:38:48 crc kubenswrapper[4869]: I0127 10:38:48.574632 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ab43b25-6ea3-4061-9c4a-6fb427539d3c-utilities" (OuterVolumeSpecName: "utilities") pod "4ab43b25-6ea3-4061-9c4a-6fb427539d3c" (UID: "4ab43b25-6ea3-4061-9c4a-6fb427539d3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:38:48 crc kubenswrapper[4869]: I0127 10:38:48.584373 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ab43b25-6ea3-4061-9c4a-6fb427539d3c-kube-api-access-vmn9j" (OuterVolumeSpecName: "kube-api-access-vmn9j") pod "4ab43b25-6ea3-4061-9c4a-6fb427539d3c" (UID: "4ab43b25-6ea3-4061-9c4a-6fb427539d3c"). InnerVolumeSpecName "kube-api-access-vmn9j". 
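
The teardown above for the deleted certified-operators-hhg6m pod runs in three logged phases per volume: reconciler_common reports "UnmountVolume started", operation_generator reports "UnmountVolume.TearDown succeeded", and reconciler_common finally reports "Volume detached" with an empty DevicePath. A sketch that reconstructs this per-volume progression from the text (hypothetical tooling; the substring markers and the optional-backslash quoting are assumptions fitted to this capture):

import re
from collections import defaultdict

PHASES = {
    "UnmountVolume started": "started",
    "UnmountVolume.TearDown succeeded": "torn down",
    "Volume detached": "detached",
}
# Volume names appear both as \"utilities\" (escaped, structured entries) and
# as "kubernetes.io/empty-dir/..." (plain, TearDown entries), so the backslash
# before each quote is optional here.
VOLNAME = re.compile(r'volume \\?"([^"\\]+)\\?"')

def unmount_progress(lines):
    """Map each volume name seen in the stream to the teardown phases logged for it."""
    seen = defaultdict(set)
    for line in lines:
        for marker, phase in PHASES.items():
            if marker in line:
                m = VOLNAME.search(line)
                if m:
                    seen[m.group(1)].add(phase)
    return dict(seen)

# Over the hhg6m entries above, this ends with all three phases recorded for
# utilities, catalog-content and kube-api-access-vmn9j (the TearDown lines
# record them under their long UniqueName forms).
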
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:38:48 crc kubenswrapper[4869]: I0127 10:38:48.622795 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ab43b25-6ea3-4061-9c4a-6fb427539d3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ab43b25-6ea3-4061-9c4a-6fb427539d3c" (UID: "4ab43b25-6ea3-4061-9c4a-6fb427539d3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:38:48 crc kubenswrapper[4869]: I0127 10:38:48.675691 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ab43b25-6ea3-4061-9c4a-6fb427539d3c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:38:48 crc kubenswrapper[4869]: I0127 10:38:48.675734 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmn9j\" (UniqueName: \"kubernetes.io/projected/4ab43b25-6ea3-4061-9c4a-6fb427539d3c-kube-api-access-vmn9j\") on node \"crc\" DevicePath \"\"" Jan 27 10:38:48 crc kubenswrapper[4869]: I0127 10:38:48.675749 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ab43b25-6ea3-4061-9c4a-6fb427539d3c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:38:49 crc kubenswrapper[4869]: I0127 10:38:49.421891 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hhg6m" Jan 27 10:38:49 crc kubenswrapper[4869]: I0127 10:38:49.453405 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hhg6m"] Jan 27 10:38:49 crc kubenswrapper[4869]: I0127 10:38:49.466563 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hhg6m"] Jan 27 10:38:50 crc kubenswrapper[4869]: I0127 10:38:50.043931 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ab43b25-6ea3-4061-9c4a-6fb427539d3c" path="/var/lib/kubelet/pods/4ab43b25-6ea3-4061-9c4a-6fb427539d3c/volumes" Jan 27 10:38:54 crc kubenswrapper[4869]: I0127 10:38:54.416731 4869 scope.go:117] "RemoveContainer" containerID="62050a7d5d7288948279a8de8f96e19b1c8dc0a6354e01f169572131345cafe0" Jan 27 10:38:54 crc kubenswrapper[4869]: I0127 10:38:54.442397 4869 scope.go:117] "RemoveContainer" containerID="4acc83b116e18792de3b92e104a1773260360831ca3d74107941934a3c0fe741" Jan 27 10:38:54 crc kubenswrapper[4869]: I0127 10:38:54.477278 4869 scope.go:117] "RemoveContainer" containerID="859280bf72fdff4b2100b04ec58d42521b3003789367e4d530c50a873f457fb0" Jan 27 10:38:56 crc kubenswrapper[4869]: I0127 10:38:56.033719 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:38:56 crc kubenswrapper[4869]: E0127 10:38:56.035046 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:38:59 crc kubenswrapper[4869]: I0127 10:38:59.033160 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:38:59 crc kubenswrapper[4869]: E0127 10:38:59.033809 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:39:08 crc kubenswrapper[4869]: I0127 10:39:08.034517 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:39:08 crc kubenswrapper[4869]: E0127 10:39:08.036897 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:39:13 crc kubenswrapper[4869]: I0127 10:39:13.036945 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:39:13 crc kubenswrapper[4869]: E0127 10:39:13.038436 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:39:15 crc kubenswrapper[4869]: I0127 10:39:15.697658 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:39:15 crc kubenswrapper[4869]: I0127 10:39:15.698097 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:39:22 crc kubenswrapper[4869]: I0127 10:39:22.039560 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:39:22 crc kubenswrapper[4869]: E0127 10:39:22.040494 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:39:28 crc kubenswrapper[4869]: I0127 10:39:28.032654 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:39:28 crc kubenswrapper[4869]: E0127 10:39:28.033306 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:39:31 crc kubenswrapper[4869]: I0127 10:39:31.410981 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-smctz/must-gather-gm4xt"] Jan 27 10:39:31 crc 
kubenswrapper[4869]: E0127 10:39:31.412096 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab43b25-6ea3-4061-9c4a-6fb427539d3c" containerName="extract-content" Jan 27 10:39:31 crc kubenswrapper[4869]: I0127 10:39:31.412115 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab43b25-6ea3-4061-9c4a-6fb427539d3c" containerName="extract-content" Jan 27 10:39:31 crc kubenswrapper[4869]: E0127 10:39:31.412136 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab43b25-6ea3-4061-9c4a-6fb427539d3c" containerName="registry-server" Jan 27 10:39:31 crc kubenswrapper[4869]: I0127 10:39:31.412143 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab43b25-6ea3-4061-9c4a-6fb427539d3c" containerName="registry-server" Jan 27 10:39:31 crc kubenswrapper[4869]: E0127 10:39:31.412165 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab43b25-6ea3-4061-9c4a-6fb427539d3c" containerName="extract-utilities" Jan 27 10:39:31 crc kubenswrapper[4869]: I0127 10:39:31.412175 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab43b25-6ea3-4061-9c4a-6fb427539d3c" containerName="extract-utilities" Jan 27 10:39:31 crc kubenswrapper[4869]: I0127 10:39:31.412386 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ab43b25-6ea3-4061-9c4a-6fb427539d3c" containerName="registry-server" Jan 27 10:39:31 crc kubenswrapper[4869]: I0127 10:39:31.413376 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-smctz/must-gather-gm4xt" Jan 27 10:39:31 crc kubenswrapper[4869]: I0127 10:39:31.415015 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-smctz"/"default-dockercfg-w9jqw" Jan 27 10:39:31 crc kubenswrapper[4869]: I0127 10:39:31.418321 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-smctz"/"openshift-service-ca.crt" Jan 27 10:39:31 crc kubenswrapper[4869]: I0127 10:39:31.418902 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-smctz"/"kube-root-ca.crt" Jan 27 10:39:31 crc kubenswrapper[4869]: I0127 10:39:31.448524 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-smctz/must-gather-gm4xt"] Jan 27 10:39:31 crc kubenswrapper[4869]: I0127 10:39:31.449701 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwlgr\" (UniqueName: \"kubernetes.io/projected/293e3afd-0b77-490d-88bc-56f06235a889-kube-api-access-cwlgr\") pod \"must-gather-gm4xt\" (UID: \"293e3afd-0b77-490d-88bc-56f06235a889\") " pod="openshift-must-gather-smctz/must-gather-gm4xt" Jan 27 10:39:31 crc kubenswrapper[4869]: I0127 10:39:31.449755 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/293e3afd-0b77-490d-88bc-56f06235a889-must-gather-output\") pod \"must-gather-gm4xt\" (UID: \"293e3afd-0b77-490d-88bc-56f06235a889\") " pod="openshift-must-gather-smctz/must-gather-gm4xt" Jan 27 10:39:31 crc kubenswrapper[4869]: I0127 10:39:31.551006 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwlgr\" (UniqueName: \"kubernetes.io/projected/293e3afd-0b77-490d-88bc-56f06235a889-kube-api-access-cwlgr\") pod \"must-gather-gm4xt\" (UID: \"293e3afd-0b77-490d-88bc-56f06235a889\") " pod="openshift-must-gather-smctz/must-gather-gm4xt" Jan 27 10:39:31 crc 
kubenswrapper[4869]: I0127 10:39:31.551079 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/293e3afd-0b77-490d-88bc-56f06235a889-must-gather-output\") pod \"must-gather-gm4xt\" (UID: \"293e3afd-0b77-490d-88bc-56f06235a889\") " pod="openshift-must-gather-smctz/must-gather-gm4xt" Jan 27 10:39:31 crc kubenswrapper[4869]: I0127 10:39:31.551591 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/293e3afd-0b77-490d-88bc-56f06235a889-must-gather-output\") pod \"must-gather-gm4xt\" (UID: \"293e3afd-0b77-490d-88bc-56f06235a889\") " pod="openshift-must-gather-smctz/must-gather-gm4xt" Jan 27 10:39:31 crc kubenswrapper[4869]: I0127 10:39:31.575292 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwlgr\" (UniqueName: \"kubernetes.io/projected/293e3afd-0b77-490d-88bc-56f06235a889-kube-api-access-cwlgr\") pod \"must-gather-gm4xt\" (UID: \"293e3afd-0b77-490d-88bc-56f06235a889\") " pod="openshift-must-gather-smctz/must-gather-gm4xt" Jan 27 10:39:31 crc kubenswrapper[4869]: I0127 10:39:31.730413 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-smctz/must-gather-gm4xt" Jan 27 10:39:32 crc kubenswrapper[4869]: I0127 10:39:32.225007 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-smctz/must-gather-gm4xt"] Jan 27 10:39:32 crc kubenswrapper[4869]: I0127 10:39:32.832147 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-smctz/must-gather-gm4xt" event={"ID":"293e3afd-0b77-490d-88bc-56f06235a889","Type":"ContainerStarted","Data":"95cbbfe7679b9bb66640268e4ae91c9822354f90b62d45c2fb195ec1f4ebc5e4"} Jan 27 10:39:37 crc kubenswrapper[4869]: I0127 10:39:37.036054 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:39:37 crc kubenswrapper[4869]: E0127 10:39:37.036607 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:39:39 crc kubenswrapper[4869]: I0127 10:39:39.034237 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:39:39 crc kubenswrapper[4869]: E0127 10:39:39.036397 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:39:39 crc kubenswrapper[4869]: I0127 10:39:39.885972 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-smctz/must-gather-gm4xt" event={"ID":"293e3afd-0b77-490d-88bc-56f06235a889","Type":"ContainerStarted","Data":"b92be076f1038507985c02ab883776ca3fe5ff24cdc4929261e1448696df239a"} Jan 27 10:39:39 crc kubenswrapper[4869]: I0127 10:39:39.886025 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-smctz/must-gather-gm4xt" 
event={"ID":"293e3afd-0b77-490d-88bc-56f06235a889","Type":"ContainerStarted","Data":"0141cd4f3e2cb3559b8bbc901e639f25c8fd1a07de3896f1c2d198ecdcfc5bc7"} Jan 27 10:39:39 crc kubenswrapper[4869]: I0127 10:39:39.903007 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-smctz/must-gather-gm4xt" podStartSLOduration=1.980826845 podStartE2EDuration="8.902987267s" podCreationTimestamp="2026-01-27 10:39:31 +0000 UTC" firstStartedPulling="2026-01-27 10:39:32.244127384 +0000 UTC m=+2740.864551477" lastFinishedPulling="2026-01-27 10:39:39.166287806 +0000 UTC m=+2747.786711899" observedRunningTime="2026-01-27 10:39:39.898448756 +0000 UTC m=+2748.518872839" watchObservedRunningTime="2026-01-27 10:39:39.902987267 +0000 UTC m=+2748.523411360" Jan 27 10:39:40 crc kubenswrapper[4869]: I0127 10:39:40.121926 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-smctz/crc-debug-jdtfp"] Jan 27 10:39:40 crc kubenswrapper[4869]: I0127 10:39:40.123079 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-smctz/crc-debug-jdtfp" Jan 27 10:39:40 crc kubenswrapper[4869]: I0127 10:39:40.223296 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49lgg\" (UniqueName: \"kubernetes.io/projected/a0acbc3d-4f3e-465a-8c1e-45a181706ab7-kube-api-access-49lgg\") pod \"crc-debug-jdtfp\" (UID: \"a0acbc3d-4f3e-465a-8c1e-45a181706ab7\") " pod="openshift-must-gather-smctz/crc-debug-jdtfp" Jan 27 10:39:40 crc kubenswrapper[4869]: I0127 10:39:40.223357 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0acbc3d-4f3e-465a-8c1e-45a181706ab7-host\") pod \"crc-debug-jdtfp\" (UID: \"a0acbc3d-4f3e-465a-8c1e-45a181706ab7\") " pod="openshift-must-gather-smctz/crc-debug-jdtfp" Jan 27 10:39:40 crc kubenswrapper[4869]: I0127 10:39:40.324951 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49lgg\" (UniqueName: \"kubernetes.io/projected/a0acbc3d-4f3e-465a-8c1e-45a181706ab7-kube-api-access-49lgg\") pod \"crc-debug-jdtfp\" (UID: \"a0acbc3d-4f3e-465a-8c1e-45a181706ab7\") " pod="openshift-must-gather-smctz/crc-debug-jdtfp" Jan 27 10:39:40 crc kubenswrapper[4869]: I0127 10:39:40.325033 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0acbc3d-4f3e-465a-8c1e-45a181706ab7-host\") pod \"crc-debug-jdtfp\" (UID: \"a0acbc3d-4f3e-465a-8c1e-45a181706ab7\") " pod="openshift-must-gather-smctz/crc-debug-jdtfp" Jan 27 10:39:40 crc kubenswrapper[4869]: I0127 10:39:40.325222 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0acbc3d-4f3e-465a-8c1e-45a181706ab7-host\") pod \"crc-debug-jdtfp\" (UID: \"a0acbc3d-4f3e-465a-8c1e-45a181706ab7\") " pod="openshift-must-gather-smctz/crc-debug-jdtfp" Jan 27 10:39:40 crc kubenswrapper[4869]: I0127 10:39:40.353424 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49lgg\" (UniqueName: \"kubernetes.io/projected/a0acbc3d-4f3e-465a-8c1e-45a181706ab7-kube-api-access-49lgg\") pod \"crc-debug-jdtfp\" (UID: \"a0acbc3d-4f3e-465a-8c1e-45a181706ab7\") " pod="openshift-must-gather-smctz/crc-debug-jdtfp" Jan 27 10:39:40 crc kubenswrapper[4869]: I0127 10:39:40.442519 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-smctz/crc-debug-jdtfp" Jan 27 10:39:40 crc kubenswrapper[4869]: W0127 10:39:40.471984 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0acbc3d_4f3e_465a_8c1e_45a181706ab7.slice/crio-c179344d7706f20d117b0042fd3eb69c85fdae0236495f908ae3a541c2ffee3e WatchSource:0}: Error finding container c179344d7706f20d117b0042fd3eb69c85fdae0236495f908ae3a541c2ffee3e: Status 404 returned error can't find the container with id c179344d7706f20d117b0042fd3eb69c85fdae0236495f908ae3a541c2ffee3e Jan 27 10:39:40 crc kubenswrapper[4869]: I0127 10:39:40.895959 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-smctz/crc-debug-jdtfp" event={"ID":"a0acbc3d-4f3e-465a-8c1e-45a181706ab7","Type":"ContainerStarted","Data":"c179344d7706f20d117b0042fd3eb69c85fdae0236495f908ae3a541c2ffee3e"} Jan 27 10:39:45 crc kubenswrapper[4869]: I0127 10:39:45.697705 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:39:45 crc kubenswrapper[4869]: I0127 10:39:45.698291 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:39:52 crc kubenswrapper[4869]: I0127 10:39:52.039227 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:39:52 crc kubenswrapper[4869]: E0127 10:39:52.039810 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:39:52 crc kubenswrapper[4869]: I0127 10:39:52.040060 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:39:52 crc kubenswrapper[4869]: E0127 10:39:52.044231 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:39:53 crc kubenswrapper[4869]: I0127 10:39:53.359277 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-smctz/crc-debug-jdtfp" event={"ID":"a0acbc3d-4f3e-465a-8c1e-45a181706ab7","Type":"ContainerStarted","Data":"f59bf12eb64cbc6b710c3cd82e75305dcae07e3e0bb1a5f60c96284ae23f39a8"} Jan 27 10:39:53 crc kubenswrapper[4869]: I0127 10:39:53.374481 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-smctz/crc-debug-jdtfp" podStartSLOduration=1.411164544 podStartE2EDuration="13.374460017s" podCreationTimestamp="2026-01-27 10:39:40 +0000 UTC" firstStartedPulling="2026-01-27 10:39:40.475944728 +0000 UTC 
m=+2749.096368811" lastFinishedPulling="2026-01-27 10:39:52.439240201 +0000 UTC m=+2761.059664284" observedRunningTime="2026-01-27 10:39:53.371062822 +0000 UTC m=+2761.991486905" watchObservedRunningTime="2026-01-27 10:39:53.374460017 +0000 UTC m=+2761.994884100" Jan 27 10:40:03 crc kubenswrapper[4869]: I0127 10:40:03.033236 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:40:03 crc kubenswrapper[4869]: E0127 10:40:03.035277 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:40:07 crc kubenswrapper[4869]: I0127 10:40:07.033267 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:40:07 crc kubenswrapper[4869]: E0127 10:40:07.034034 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:40:10 crc kubenswrapper[4869]: E0127 10:40:10.766516 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0acbc3d_4f3e_465a_8c1e_45a181706ab7.slice/crio-f59bf12eb64cbc6b710c3cd82e75305dcae07e3e0bb1a5f60c96284ae23f39a8.scope\": RecentStats: unable to find data in memory cache]" Jan 27 10:40:11 crc kubenswrapper[4869]: I0127 10:40:11.491401 4869 generic.go:334] "Generic (PLEG): container finished" podID="a0acbc3d-4f3e-465a-8c1e-45a181706ab7" containerID="f59bf12eb64cbc6b710c3cd82e75305dcae07e3e0bb1a5f60c96284ae23f39a8" exitCode=0 Jan 27 10:40:11 crc kubenswrapper[4869]: I0127 10:40:11.491482 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-smctz/crc-debug-jdtfp" event={"ID":"a0acbc3d-4f3e-465a-8c1e-45a181706ab7","Type":"ContainerDied","Data":"f59bf12eb64cbc6b710c3cd82e75305dcae07e3e0bb1a5f60c96284ae23f39a8"} Jan 27 10:40:12 crc kubenswrapper[4869]: I0127 10:40:12.578340 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-smctz/crc-debug-jdtfp" Jan 27 10:40:12 crc kubenswrapper[4869]: I0127 10:40:12.630928 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-smctz/crc-debug-jdtfp"] Jan 27 10:40:12 crc kubenswrapper[4869]: I0127 10:40:12.640938 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-smctz/crc-debug-jdtfp"] Jan 27 10:40:12 crc kubenswrapper[4869]: I0127 10:40:12.755493 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49lgg\" (UniqueName: \"kubernetes.io/projected/a0acbc3d-4f3e-465a-8c1e-45a181706ab7-kube-api-access-49lgg\") pod \"a0acbc3d-4f3e-465a-8c1e-45a181706ab7\" (UID: \"a0acbc3d-4f3e-465a-8c1e-45a181706ab7\") " Jan 27 10:40:12 crc kubenswrapper[4869]: I0127 10:40:12.755639 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0acbc3d-4f3e-465a-8c1e-45a181706ab7-host\") pod \"a0acbc3d-4f3e-465a-8c1e-45a181706ab7\" (UID: \"a0acbc3d-4f3e-465a-8c1e-45a181706ab7\") " Jan 27 10:40:12 crc kubenswrapper[4869]: I0127 10:40:12.755730 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a0acbc3d-4f3e-465a-8c1e-45a181706ab7-host" (OuterVolumeSpecName: "host") pod "a0acbc3d-4f3e-465a-8c1e-45a181706ab7" (UID: "a0acbc3d-4f3e-465a-8c1e-45a181706ab7"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:40:12 crc kubenswrapper[4869]: I0127 10:40:12.756234 4869 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a0acbc3d-4f3e-465a-8c1e-45a181706ab7-host\") on node \"crc\" DevicePath \"\"" Jan 27 10:40:12 crc kubenswrapper[4869]: I0127 10:40:12.771064 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0acbc3d-4f3e-465a-8c1e-45a181706ab7-kube-api-access-49lgg" (OuterVolumeSpecName: "kube-api-access-49lgg") pod "a0acbc3d-4f3e-465a-8c1e-45a181706ab7" (UID: "a0acbc3d-4f3e-465a-8c1e-45a181706ab7"). InnerVolumeSpecName "kube-api-access-49lgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:40:12 crc kubenswrapper[4869]: I0127 10:40:12.858297 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49lgg\" (UniqueName: \"kubernetes.io/projected/a0acbc3d-4f3e-465a-8c1e-45a181706ab7-kube-api-access-49lgg\") on node \"crc\" DevicePath \"\"" Jan 27 10:40:13 crc kubenswrapper[4869]: I0127 10:40:13.504382 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c179344d7706f20d117b0042fd3eb69c85fdae0236495f908ae3a541c2ffee3e" Jan 27 10:40:13 crc kubenswrapper[4869]: I0127 10:40:13.504464 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-smctz/crc-debug-jdtfp" Jan 27 10:40:13 crc kubenswrapper[4869]: I0127 10:40:13.819791 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-smctz/crc-debug-2l7gk"] Jan 27 10:40:13 crc kubenswrapper[4869]: E0127 10:40:13.820099 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0acbc3d-4f3e-465a-8c1e-45a181706ab7" containerName="container-00" Jan 27 10:40:13 crc kubenswrapper[4869]: I0127 10:40:13.820111 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0acbc3d-4f3e-465a-8c1e-45a181706ab7" containerName="container-00" Jan 27 10:40:13 crc kubenswrapper[4869]: I0127 10:40:13.820289 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0acbc3d-4f3e-465a-8c1e-45a181706ab7" containerName="container-00" Jan 27 10:40:13 crc kubenswrapper[4869]: I0127 10:40:13.820748 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-smctz/crc-debug-2l7gk" Jan 27 10:40:13 crc kubenswrapper[4869]: I0127 10:40:13.973500 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q74p2\" (UniqueName: \"kubernetes.io/projected/823da47a-6de6-4877-8210-e464a0271aa8-kube-api-access-q74p2\") pod \"crc-debug-2l7gk\" (UID: \"823da47a-6de6-4877-8210-e464a0271aa8\") " pod="openshift-must-gather-smctz/crc-debug-2l7gk" Jan 27 10:40:13 crc kubenswrapper[4869]: I0127 10:40:13.974456 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/823da47a-6de6-4877-8210-e464a0271aa8-host\") pod \"crc-debug-2l7gk\" (UID: \"823da47a-6de6-4877-8210-e464a0271aa8\") " pod="openshift-must-gather-smctz/crc-debug-2l7gk" Jan 27 10:40:14 crc kubenswrapper[4869]: I0127 10:40:14.043698 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0acbc3d-4f3e-465a-8c1e-45a181706ab7" path="/var/lib/kubelet/pods/a0acbc3d-4f3e-465a-8c1e-45a181706ab7/volumes" Jan 27 10:40:14 crc kubenswrapper[4869]: I0127 10:40:14.076148 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/823da47a-6de6-4877-8210-e464a0271aa8-host\") pod \"crc-debug-2l7gk\" (UID: \"823da47a-6de6-4877-8210-e464a0271aa8\") " pod="openshift-must-gather-smctz/crc-debug-2l7gk" Jan 27 10:40:14 crc kubenswrapper[4869]: I0127 10:40:14.076201 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q74p2\" (UniqueName: \"kubernetes.io/projected/823da47a-6de6-4877-8210-e464a0271aa8-kube-api-access-q74p2\") pod \"crc-debug-2l7gk\" (UID: \"823da47a-6de6-4877-8210-e464a0271aa8\") " pod="openshift-must-gather-smctz/crc-debug-2l7gk" Jan 27 10:40:14 crc kubenswrapper[4869]: I0127 10:40:14.076275 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/823da47a-6de6-4877-8210-e464a0271aa8-host\") pod \"crc-debug-2l7gk\" (UID: \"823da47a-6de6-4877-8210-e464a0271aa8\") " pod="openshift-must-gather-smctz/crc-debug-2l7gk" Jan 27 10:40:14 crc kubenswrapper[4869]: I0127 10:40:14.100190 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q74p2\" (UniqueName: \"kubernetes.io/projected/823da47a-6de6-4877-8210-e464a0271aa8-kube-api-access-q74p2\") pod \"crc-debug-2l7gk\" (UID: \"823da47a-6de6-4877-8210-e464a0271aa8\") " 
pod="openshift-must-gather-smctz/crc-debug-2l7gk" Jan 27 10:40:14 crc kubenswrapper[4869]: I0127 10:40:14.135207 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-smctz/crc-debug-2l7gk" Jan 27 10:40:14 crc kubenswrapper[4869]: W0127 10:40:14.163569 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod823da47a_6de6_4877_8210_e464a0271aa8.slice/crio-08bde1327ff9e91320559b444a76329ec1417fbfe01ce379b745af42c2994c0e WatchSource:0}: Error finding container 08bde1327ff9e91320559b444a76329ec1417fbfe01ce379b745af42c2994c0e: Status 404 returned error can't find the container with id 08bde1327ff9e91320559b444a76329ec1417fbfe01ce379b745af42c2994c0e Jan 27 10:40:14 crc kubenswrapper[4869]: I0127 10:40:14.514139 4869 generic.go:334] "Generic (PLEG): container finished" podID="823da47a-6de6-4877-8210-e464a0271aa8" containerID="6eb7f603c3c7374d5ef8dce97fef4419a14fbf531392f113dcd642e14d4e0348" exitCode=1 Jan 27 10:40:14 crc kubenswrapper[4869]: I0127 10:40:14.514248 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-smctz/crc-debug-2l7gk" event={"ID":"823da47a-6de6-4877-8210-e464a0271aa8","Type":"ContainerDied","Data":"6eb7f603c3c7374d5ef8dce97fef4419a14fbf531392f113dcd642e14d4e0348"} Jan 27 10:40:14 crc kubenswrapper[4869]: I0127 10:40:14.514509 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-smctz/crc-debug-2l7gk" event={"ID":"823da47a-6de6-4877-8210-e464a0271aa8","Type":"ContainerStarted","Data":"08bde1327ff9e91320559b444a76329ec1417fbfe01ce379b745af42c2994c0e"} Jan 27 10:40:14 crc kubenswrapper[4869]: I0127 10:40:14.558885 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-smctz/crc-debug-2l7gk"] Jan 27 10:40:14 crc kubenswrapper[4869]: I0127 10:40:14.564402 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-smctz/crc-debug-2l7gk"] Jan 27 10:40:15 crc kubenswrapper[4869]: I0127 10:40:15.619254 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-smctz/crc-debug-2l7gk" Jan 27 10:40:15 crc kubenswrapper[4869]: I0127 10:40:15.698075 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:40:15 crc kubenswrapper[4869]: I0127 10:40:15.698142 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:40:15 crc kubenswrapper[4869]: I0127 10:40:15.698207 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 10:40:15 crc kubenswrapper[4869]: I0127 10:40:15.699024 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f26f1fb2cee5006f1f75a2a2a614b9386b95a957b8a625a62d67f3bf0077c924"} pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 10:40:15 crc kubenswrapper[4869]: I0127 10:40:15.699135 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" containerID="cri-o://f26f1fb2cee5006f1f75a2a2a614b9386b95a957b8a625a62d67f3bf0077c924" gracePeriod=600 Jan 27 10:40:15 crc kubenswrapper[4869]: I0127 10:40:15.700625 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q74p2\" (UniqueName: \"kubernetes.io/projected/823da47a-6de6-4877-8210-e464a0271aa8-kube-api-access-q74p2\") pod \"823da47a-6de6-4877-8210-e464a0271aa8\" (UID: \"823da47a-6de6-4877-8210-e464a0271aa8\") " Jan 27 10:40:15 crc kubenswrapper[4869]: I0127 10:40:15.700721 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/823da47a-6de6-4877-8210-e464a0271aa8-host\") pod \"823da47a-6de6-4877-8210-e464a0271aa8\" (UID: \"823da47a-6de6-4877-8210-e464a0271aa8\") " Jan 27 10:40:15 crc kubenswrapper[4869]: I0127 10:40:15.700846 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/823da47a-6de6-4877-8210-e464a0271aa8-host" (OuterVolumeSpecName: "host") pod "823da47a-6de6-4877-8210-e464a0271aa8" (UID: "823da47a-6de6-4877-8210-e464a0271aa8"). InnerVolumeSpecName "host". 
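
The machine-config-daemon sequence above completes the liveness-probe arc visible since 10:39:15: after repeated "connection refused" failures against http://127.0.0.1:8798/health, the kubelet logs that the container "failed liveness probe, will be restarted" and kills it with its 600s grace period; the ContainerDied/ContainerStarted pair at 10:40:16 further down confirms the restart. A rough stand-alone analogue of that HTTP check (not the kubelet's actual prober; the URL comes from the log, and the 2xx/3xx success rule matches Kubernetes' documented HTTP-probe semantics):

import urllib.request
import urllib.error

def http_liveness(url="http://127.0.0.1:8798/health", timeout=1.0):
    """Return True on any 2xx/3xx response; connection refusal or 4xx/5xx is failure."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        # Covers the "connect: connection refused" case seen in the probe output above.
        return False
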
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 10:40:15 crc kubenswrapper[4869]: I0127 10:40:15.701266 4869 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/823da47a-6de6-4877-8210-e464a0271aa8-host\") on node \"crc\" DevicePath \"\"" Jan 27 10:40:15 crc kubenswrapper[4869]: I0127 10:40:15.707046 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/823da47a-6de6-4877-8210-e464a0271aa8-kube-api-access-q74p2" (OuterVolumeSpecName: "kube-api-access-q74p2") pod "823da47a-6de6-4877-8210-e464a0271aa8" (UID: "823da47a-6de6-4877-8210-e464a0271aa8"). InnerVolumeSpecName "kube-api-access-q74p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:40:15 crc kubenswrapper[4869]: I0127 10:40:15.803389 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q74p2\" (UniqueName: \"kubernetes.io/projected/823da47a-6de6-4877-8210-e464a0271aa8-kube-api-access-q74p2\") on node \"crc\" DevicePath \"\"" Jan 27 10:40:16 crc kubenswrapper[4869]: I0127 10:40:16.034401 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:40:16 crc kubenswrapper[4869]: E0127 10:40:16.034863 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:40:16 crc kubenswrapper[4869]: I0127 10:40:16.041087 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="823da47a-6de6-4877-8210-e464a0271aa8" path="/var/lib/kubelet/pods/823da47a-6de6-4877-8210-e464a0271aa8/volumes" Jan 27 10:40:16 crc kubenswrapper[4869]: I0127 10:40:16.531868 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-smctz/crc-debug-2l7gk" Jan 27 10:40:16 crc kubenswrapper[4869]: I0127 10:40:16.533080 4869 scope.go:117] "RemoveContainer" containerID="6eb7f603c3c7374d5ef8dce97fef4419a14fbf531392f113dcd642e14d4e0348" Jan 27 10:40:16 crc kubenswrapper[4869]: I0127 10:40:16.540261 4869 generic.go:334] "Generic (PLEG): container finished" podID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerID="f26f1fb2cee5006f1f75a2a2a614b9386b95a957b8a625a62d67f3bf0077c924" exitCode=0 Jan 27 10:40:16 crc kubenswrapper[4869]: I0127 10:40:16.540305 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerDied","Data":"f26f1fb2cee5006f1f75a2a2a614b9386b95a957b8a625a62d67f3bf0077c924"} Jan 27 10:40:16 crc kubenswrapper[4869]: I0127 10:40:16.540329 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerStarted","Data":"6a5e719eee11fa7182938ba5394dd5451103ebb77494fcce1358cfad696163d9"} Jan 27 10:40:16 crc kubenswrapper[4869]: I0127 10:40:16.558981 4869 scope.go:117] "RemoveContainer" containerID="64438db784a0954fd0841768bf7f10faeb9e3c3b8f6add6ec017e66a20b46490" Jan 27 10:40:21 crc kubenswrapper[4869]: I0127 10:40:21.032819 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:40:21 crc kubenswrapper[4869]: E0127 10:40:21.034963 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:40:29 crc kubenswrapper[4869]: I0127 10:40:29.033023 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:40:29 crc kubenswrapper[4869]: E0127 10:40:29.033960 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:40:31 crc kubenswrapper[4869]: I0127 10:40:31.230214 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8t5tg"] Jan 27 10:40:31 crc kubenswrapper[4869]: E0127 10:40:31.231313 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="823da47a-6de6-4877-8210-e464a0271aa8" containerName="container-00" Jan 27 10:40:31 crc kubenswrapper[4869]: I0127 10:40:31.231333 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="823da47a-6de6-4877-8210-e464a0271aa8" containerName="container-00" Jan 27 10:40:31 crc kubenswrapper[4869]: I0127 10:40:31.231551 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="823da47a-6de6-4877-8210-e464a0271aa8" containerName="container-00" Jan 27 10:40:31 crc kubenswrapper[4869]: I0127 10:40:31.236549 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8t5tg" Jan 27 10:40:31 crc kubenswrapper[4869]: I0127 10:40:31.252774 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8t5tg"] Jan 27 10:40:31 crc kubenswrapper[4869]: I0127 10:40:31.346202 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52c3544c-2d4e-4a12-84c0-6c9a3cc9a962-utilities\") pod \"redhat-operators-8t5tg\" (UID: \"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962\") " pod="openshift-marketplace/redhat-operators-8t5tg" Jan 27 10:40:31 crc kubenswrapper[4869]: I0127 10:40:31.346252 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52c3544c-2d4e-4a12-84c0-6c9a3cc9a962-catalog-content\") pod \"redhat-operators-8t5tg\" (UID: \"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962\") " pod="openshift-marketplace/redhat-operators-8t5tg" Jan 27 10:40:31 crc kubenswrapper[4869]: I0127 10:40:31.346277 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2glx\" (UniqueName: \"kubernetes.io/projected/52c3544c-2d4e-4a12-84c0-6c9a3cc9a962-kube-api-access-w2glx\") pod \"redhat-operators-8t5tg\" (UID: \"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962\") " pod="openshift-marketplace/redhat-operators-8t5tg" Jan 27 10:40:31 crc kubenswrapper[4869]: I0127 10:40:31.447240 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52c3544c-2d4e-4a12-84c0-6c9a3cc9a962-utilities\") pod \"redhat-operators-8t5tg\" (UID: \"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962\") " pod="openshift-marketplace/redhat-operators-8t5tg" Jan 27 10:40:31 crc kubenswrapper[4869]: I0127 10:40:31.447287 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52c3544c-2d4e-4a12-84c0-6c9a3cc9a962-catalog-content\") pod \"redhat-operators-8t5tg\" (UID: \"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962\") " pod="openshift-marketplace/redhat-operators-8t5tg" Jan 27 10:40:31 crc kubenswrapper[4869]: I0127 10:40:31.447315 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2glx\" (UniqueName: \"kubernetes.io/projected/52c3544c-2d4e-4a12-84c0-6c9a3cc9a962-kube-api-access-w2glx\") pod \"redhat-operators-8t5tg\" (UID: \"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962\") " pod="openshift-marketplace/redhat-operators-8t5tg" Jan 27 10:40:31 crc kubenswrapper[4869]: I0127 10:40:31.447741 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52c3544c-2d4e-4a12-84c0-6c9a3cc9a962-utilities\") pod \"redhat-operators-8t5tg\" (UID: \"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962\") " pod="openshift-marketplace/redhat-operators-8t5tg" Jan 27 10:40:31 crc kubenswrapper[4869]: I0127 10:40:31.447967 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52c3544c-2d4e-4a12-84c0-6c9a3cc9a962-catalog-content\") pod \"redhat-operators-8t5tg\" (UID: \"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962\") " pod="openshift-marketplace/redhat-operators-8t5tg" Jan 27 10:40:31 crc kubenswrapper[4869]: I0127 10:40:31.465088 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-w2glx\" (UniqueName: \"kubernetes.io/projected/52c3544c-2d4e-4a12-84c0-6c9a3cc9a962-kube-api-access-w2glx\") pod \"redhat-operators-8t5tg\" (UID: \"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962\") " pod="openshift-marketplace/redhat-operators-8t5tg" Jan 27 10:40:31 crc kubenswrapper[4869]: I0127 10:40:31.551355 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8t5tg" Jan 27 10:40:32 crc kubenswrapper[4869]: I0127 10:40:32.050325 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8t5tg"] Jan 27 10:40:32 crc kubenswrapper[4869]: I0127 10:40:32.709013 4869 generic.go:334] "Generic (PLEG): container finished" podID="52c3544c-2d4e-4a12-84c0-6c9a3cc9a962" containerID="4159bb1023c22272fd86d069e8e21d5f65911eaee3b3c55bb1be5da396871e9f" exitCode=0 Jan 27 10:40:32 crc kubenswrapper[4869]: I0127 10:40:32.709253 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8t5tg" event={"ID":"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962","Type":"ContainerDied","Data":"4159bb1023c22272fd86d069e8e21d5f65911eaee3b3c55bb1be5da396871e9f"} Jan 27 10:40:32 crc kubenswrapper[4869]: I0127 10:40:32.709325 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8t5tg" event={"ID":"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962","Type":"ContainerStarted","Data":"1d0537681107278aa296bc78de61b56c212a71fb4d0db504b3617f59c8a0208c"} Jan 27 10:40:34 crc kubenswrapper[4869]: I0127 10:40:34.723494 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8t5tg" event={"ID":"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962","Type":"ContainerStarted","Data":"48239ca06fda4eca857bf922c689f1c84836781b549dec555aaf89ebe052f6cc"} Jan 27 10:40:35 crc kubenswrapper[4869]: I0127 10:40:35.738808 4869 generic.go:334] "Generic (PLEG): container finished" podID="52c3544c-2d4e-4a12-84c0-6c9a3cc9a962" containerID="48239ca06fda4eca857bf922c689f1c84836781b549dec555aaf89ebe052f6cc" exitCode=0 Jan 27 10:40:35 crc kubenswrapper[4869]: I0127 10:40:35.740079 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8t5tg" event={"ID":"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962","Type":"ContainerDied","Data":"48239ca06fda4eca857bf922c689f1c84836781b549dec555aaf89ebe052f6cc"} Jan 27 10:40:36 crc kubenswrapper[4869]: I0127 10:40:36.037239 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:40:36 crc kubenswrapper[4869]: E0127 10:40:36.037746 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:40:36 crc kubenswrapper[4869]: I0127 10:40:36.051686 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-74f6bcbc87-lvhqh_ade660e6-68ee-4d24-a454-26bbb5f89008/init/0.log" Jan 27 10:40:36 crc kubenswrapper[4869]: I0127 10:40:36.218163 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-74f6bcbc87-lvhqh_ade660e6-68ee-4d24-a454-26bbb5f89008/dnsmasq-dns/0.log" Jan 27 10:40:36 crc kubenswrapper[4869]: I0127 10:40:36.247323 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_01cc337f-b143-40db-b4d4-cc66a1549639/kube-state-metrics/0.log" Jan 27 10:40:36 crc kubenswrapper[4869]: I0127 10:40:36.248740 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-74f6bcbc87-lvhqh_ade660e6-68ee-4d24-a454-26bbb5f89008/init/0.log" Jan 27 10:40:36 crc kubenswrapper[4869]: I0127 10:40:36.479097 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_518f6a90-a761-4aba-9740-c3aef7d8b0c4/mysql-bootstrap/0.log" Jan 27 10:40:36 crc kubenswrapper[4869]: I0127 10:40:36.529473 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_94bea268-7e39-4aeb-a45c-8008593eb45c/memcached/0.log" Jan 27 10:40:36 crc kubenswrapper[4869]: I0127 10:40:36.720166 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_518f6a90-a761-4aba-9740-c3aef7d8b0c4/galera/0.log" Jan 27 10:40:36 crc kubenswrapper[4869]: I0127 10:40:36.747676 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8t5tg" event={"ID":"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962","Type":"ContainerStarted","Data":"ad30ea88660cc97d65fc2c3e2d5aa26726e9609fcbd7fa046ee39dc4cfd35e62"} Jan 27 10:40:36 crc kubenswrapper[4869]: I0127 10:40:36.766271 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8t5tg" podStartSLOduration=2.315019832 podStartE2EDuration="5.766257416s" podCreationTimestamp="2026-01-27 10:40:31 +0000 UTC" firstStartedPulling="2026-01-27 10:40:32.71061195 +0000 UTC m=+2801.331036033" lastFinishedPulling="2026-01-27 10:40:36.161849534 +0000 UTC m=+2804.782273617" observedRunningTime="2026-01-27 10:40:36.762033624 +0000 UTC m=+2805.382457717" watchObservedRunningTime="2026-01-27 10:40:36.766257416 +0000 UTC m=+2805.386681499" Jan 27 10:40:36 crc kubenswrapper[4869]: I0127 10:40:36.794353 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9622ab05-c494-4c2b-b376-6f82ded8bdc5/mysql-bootstrap/0.log" Jan 27 10:40:36 crc kubenswrapper[4869]: I0127 10:40:36.822089 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_518f6a90-a761-4aba-9740-c3aef7d8b0c4/mysql-bootstrap/0.log" Jan 27 10:40:37 crc kubenswrapper[4869]: I0127 10:40:37.032458 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9622ab05-c494-4c2b-b376-6f82ded8bdc5/galera/0.log" Jan 27 10:40:37 crc kubenswrapper[4869]: I0127 10:40:37.131721 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_9622ab05-c494-4c2b-b376-6f82ded8bdc5/mysql-bootstrap/0.log" Jan 27 10:40:37 crc kubenswrapper[4869]: I0127 10:40:37.133191 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xl4mk_861897b2-ebbe-48ba-851c-e0c902bf8f7f/openstack-network-exporter/0.log" Jan 27 10:40:37 crc kubenswrapper[4869]: I0127 10:40:37.258506 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jd977_795fb025-6527-42e5-b95f-119a55caf010/ovsdb-server-init/0.log" Jan 27 10:40:37 crc kubenswrapper[4869]: I0127 10:40:37.431713 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jd977_795fb025-6527-42e5-b95f-119a55caf010/ovsdb-server-init/0.log" Jan 27 10:40:37 crc kubenswrapper[4869]: I0127 10:40:37.444199 4869 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jd977_795fb025-6527-42e5-b95f-119a55caf010/ovsdb-server/0.log" Jan 27 10:40:37 crc kubenswrapper[4869]: I0127 10:40:37.466075 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jd977_795fb025-6527-42e5-b95f-119a55caf010/ovs-vswitchd/0.log" Jan 27 10:40:37 crc kubenswrapper[4869]: I0127 10:40:37.618235 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-qf659_e545b253-d74a-43e1-9a14-990ea5784f16/ovn-controller/0.log" Jan 27 10:40:37 crc kubenswrapper[4869]: I0127 10:40:37.662152 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0ddfa973-a8e8-4003-a986-61838793a923/openstack-network-exporter/0.log" Jan 27 10:40:37 crc kubenswrapper[4869]: I0127 10:40:37.724418 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0ddfa973-a8e8-4003-a986-61838793a923/ovn-northd/0.log" Jan 27 10:40:37 crc kubenswrapper[4869]: I0127 10:40:37.859358 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e88fbb7c-3771-4bd5-a511-af923a24a69f/ovsdbserver-nb/0.log" Jan 27 10:40:37 crc kubenswrapper[4869]: I0127 10:40:37.875658 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e88fbb7c-3771-4bd5-a511-af923a24a69f/openstack-network-exporter/0.log" Jan 27 10:40:38 crc kubenswrapper[4869]: I0127 10:40:38.063288 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_66641dc3-4cf2-4418-905a-fe1cff14e999/openstack-network-exporter/0.log" Jan 27 10:40:38 crc kubenswrapper[4869]: I0127 10:40:38.156689 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_66641dc3-4cf2-4418-905a-fe1cff14e999/ovsdbserver-sb/0.log" Jan 27 10:40:38 crc kubenswrapper[4869]: I0127 10:40:38.212007 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80/setup-container/0.log" Jan 27 10:40:38 crc kubenswrapper[4869]: I0127 10:40:38.408375 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80/rabbitmq/10.log" Jan 27 10:40:38 crc kubenswrapper[4869]: I0127 10:40:38.409187 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80/rabbitmq/10.log" Jan 27 10:40:38 crc kubenswrapper[4869]: I0127 10:40:38.439822 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80/setup-container/0.log" Jan 27 10:40:38 crc kubenswrapper[4869]: I0127 10:40:38.588084 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_61608a46-7d70-4a1b-ac50-6238d5bf7ad9/setup-container/0.log" Jan 27 10:40:38 crc kubenswrapper[4869]: I0127 10:40:38.748382 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_61608a46-7d70-4a1b-ac50-6238d5bf7ad9/rabbitmq/10.log" Jan 27 10:40:38 crc kubenswrapper[4869]: I0127 10:40:38.756253 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_61608a46-7d70-4a1b-ac50-6238d5bf7ad9/rabbitmq/10.log" Jan 27 10:40:38 crc kubenswrapper[4869]: I0127 10:40:38.799049 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_61608a46-7d70-4a1b-ac50-6238d5bf7ad9/setup-container/0.log" Jan 27 10:40:38 crc kubenswrapper[4869]: I0127 10:40:38.902988 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-nn56w_f91198cd-1581-4ca7-9be2-98da975eefd7/swift-ring-rebalance/0.log" Jan 27 10:40:38 crc kubenswrapper[4869]: I0127 10:40:38.997719 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0429a74c-af6a-45f1-9ca2-b66dcd47ca38/account-reaper/0.log" Jan 27 10:40:39 crc kubenswrapper[4869]: I0127 10:40:39.003946 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0429a74c-af6a-45f1-9ca2-b66dcd47ca38/account-auditor/0.log" Jan 27 10:40:39 crc kubenswrapper[4869]: I0127 10:40:39.125220 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0429a74c-af6a-45f1-9ca2-b66dcd47ca38/account-replicator/0.log" Jan 27 10:40:39 crc kubenswrapper[4869]: I0127 10:40:39.147344 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0429a74c-af6a-45f1-9ca2-b66dcd47ca38/account-server/0.log" Jan 27 10:40:39 crc kubenswrapper[4869]: I0127 10:40:39.203663 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0429a74c-af6a-45f1-9ca2-b66dcd47ca38/container-auditor/0.log" Jan 27 10:40:39 crc kubenswrapper[4869]: I0127 10:40:39.214366 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0429a74c-af6a-45f1-9ca2-b66dcd47ca38/container-replicator/0.log" Jan 27 10:40:39 crc kubenswrapper[4869]: I0127 10:40:39.325112 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0429a74c-af6a-45f1-9ca2-b66dcd47ca38/container-server/0.log" Jan 27 10:40:39 crc kubenswrapper[4869]: I0127 10:40:39.357417 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0429a74c-af6a-45f1-9ca2-b66dcd47ca38/container-updater/0.log" Jan 27 10:40:39 crc kubenswrapper[4869]: I0127 10:40:39.447630 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0429a74c-af6a-45f1-9ca2-b66dcd47ca38/object-auditor/0.log" Jan 27 10:40:39 crc kubenswrapper[4869]: I0127 10:40:39.468808 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0429a74c-af6a-45f1-9ca2-b66dcd47ca38/object-expirer/0.log" Jan 27 10:40:39 crc kubenswrapper[4869]: I0127 10:40:39.576148 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0429a74c-af6a-45f1-9ca2-b66dcd47ca38/object-replicator/0.log" Jan 27 10:40:39 crc kubenswrapper[4869]: I0127 10:40:39.641567 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0429a74c-af6a-45f1-9ca2-b66dcd47ca38/object-updater/0.log" Jan 27 10:40:39 crc kubenswrapper[4869]: I0127 10:40:39.663566 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0429a74c-af6a-45f1-9ca2-b66dcd47ca38/object-server/0.log" Jan 27 10:40:39 crc kubenswrapper[4869]: I0127 10:40:39.678451 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0429a74c-af6a-45f1-9ca2-b66dcd47ca38/rsync/0.log" Jan 27 10:40:39 crc kubenswrapper[4869]: I0127 10:40:39.775711 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0429a74c-af6a-45f1-9ca2-b66dcd47ca38/swift-recon-cron/0.log" Jan 27 10:40:41 crc 
kubenswrapper[4869]: I0127 10:40:41.033462 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:40:41 crc kubenswrapper[4869]: E0127 10:40:41.034200 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:40:41 crc kubenswrapper[4869]: I0127 10:40:41.552012 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8t5tg" Jan 27 10:40:41 crc kubenswrapper[4869]: I0127 10:40:41.552266 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8t5tg" Jan 27 10:40:42 crc kubenswrapper[4869]: I0127 10:40:42.590463 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8t5tg" podUID="52c3544c-2d4e-4a12-84c0-6c9a3cc9a962" containerName="registry-server" probeResult="failure" output=< Jan 27 10:40:42 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Jan 27 10:40:42 crc kubenswrapper[4869]: > Jan 27 10:40:51 crc kubenswrapper[4869]: I0127 10:40:51.032992 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:40:51 crc kubenswrapper[4869]: E0127 10:40:51.033719 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:40:51 crc kubenswrapper[4869]: I0127 10:40:51.611314 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8t5tg" Jan 27 10:40:51 crc kubenswrapper[4869]: I0127 10:40:51.669220 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8t5tg" Jan 27 10:40:51 crc kubenswrapper[4869]: I0127 10:40:51.841170 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8t5tg"] Jan 27 10:40:52 crc kubenswrapper[4869]: I0127 10:40:52.862682 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8t5tg" podUID="52c3544c-2d4e-4a12-84c0-6c9a3cc9a962" containerName="registry-server" containerID="cri-o://ad30ea88660cc97d65fc2c3e2d5aa26726e9609fcbd7fa046ee39dc4cfd35e62" gracePeriod=2 Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.305937 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8t5tg" Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.421050 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52c3544c-2d4e-4a12-84c0-6c9a3cc9a962-catalog-content\") pod \"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962\" (UID: \"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962\") " Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.421117 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52c3544c-2d4e-4a12-84c0-6c9a3cc9a962-utilities\") pod \"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962\" (UID: \"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962\") " Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.421200 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2glx\" (UniqueName: \"kubernetes.io/projected/52c3544c-2d4e-4a12-84c0-6c9a3cc9a962-kube-api-access-w2glx\") pod \"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962\" (UID: \"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962\") " Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.422003 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52c3544c-2d4e-4a12-84c0-6c9a3cc9a962-utilities" (OuterVolumeSpecName: "utilities") pod "52c3544c-2d4e-4a12-84c0-6c9a3cc9a962" (UID: "52c3544c-2d4e-4a12-84c0-6c9a3cc9a962"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.427050 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52c3544c-2d4e-4a12-84c0-6c9a3cc9a962-kube-api-access-w2glx" (OuterVolumeSpecName: "kube-api-access-w2glx") pod "52c3544c-2d4e-4a12-84c0-6c9a3cc9a962" (UID: "52c3544c-2d4e-4a12-84c0-6c9a3cc9a962"). InnerVolumeSpecName "kube-api-access-w2glx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.522526 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52c3544c-2d4e-4a12-84c0-6c9a3cc9a962-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.522551 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2glx\" (UniqueName: \"kubernetes.io/projected/52c3544c-2d4e-4a12-84c0-6c9a3cc9a962-kube-api-access-w2glx\") on node \"crc\" DevicePath \"\"" Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.544377 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52c3544c-2d4e-4a12-84c0-6c9a3cc9a962-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52c3544c-2d4e-4a12-84c0-6c9a3cc9a962" (UID: "52c3544c-2d4e-4a12-84c0-6c9a3cc9a962"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.624285 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52c3544c-2d4e-4a12-84c0-6c9a3cc9a962-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.870069 4869 generic.go:334] "Generic (PLEG): container finished" podID="52c3544c-2d4e-4a12-84c0-6c9a3cc9a962" containerID="ad30ea88660cc97d65fc2c3e2d5aa26726e9609fcbd7fa046ee39dc4cfd35e62" exitCode=0 Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.870116 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8t5tg" Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.870115 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8t5tg" event={"ID":"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962","Type":"ContainerDied","Data":"ad30ea88660cc97d65fc2c3e2d5aa26726e9609fcbd7fa046ee39dc4cfd35e62"} Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.870252 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8t5tg" event={"ID":"52c3544c-2d4e-4a12-84c0-6c9a3cc9a962","Type":"ContainerDied","Data":"1d0537681107278aa296bc78de61b56c212a71fb4d0db504b3617f59c8a0208c"} Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.870284 4869 scope.go:117] "RemoveContainer" containerID="ad30ea88660cc97d65fc2c3e2d5aa26726e9609fcbd7fa046ee39dc4cfd35e62" Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.894900 4869 scope.go:117] "RemoveContainer" containerID="48239ca06fda4eca857bf922c689f1c84836781b549dec555aaf89ebe052f6cc" Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.900722 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8t5tg"] Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.908825 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8t5tg"] Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.931680 4869 scope.go:117] "RemoveContainer" containerID="4159bb1023c22272fd86d069e8e21d5f65911eaee3b3c55bb1be5da396871e9f" Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.951177 4869 scope.go:117] "RemoveContainer" containerID="ad30ea88660cc97d65fc2c3e2d5aa26726e9609fcbd7fa046ee39dc4cfd35e62" Jan 27 10:40:53 crc kubenswrapper[4869]: E0127 10:40:53.951622 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad30ea88660cc97d65fc2c3e2d5aa26726e9609fcbd7fa046ee39dc4cfd35e62\": container with ID starting with ad30ea88660cc97d65fc2c3e2d5aa26726e9609fcbd7fa046ee39dc4cfd35e62 not found: ID does not exist" containerID="ad30ea88660cc97d65fc2c3e2d5aa26726e9609fcbd7fa046ee39dc4cfd35e62" Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.951655 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad30ea88660cc97d65fc2c3e2d5aa26726e9609fcbd7fa046ee39dc4cfd35e62"} err="failed to get container status \"ad30ea88660cc97d65fc2c3e2d5aa26726e9609fcbd7fa046ee39dc4cfd35e62\": rpc error: code = NotFound desc = could not find container \"ad30ea88660cc97d65fc2c3e2d5aa26726e9609fcbd7fa046ee39dc4cfd35e62\": container with ID starting with ad30ea88660cc97d65fc2c3e2d5aa26726e9609fcbd7fa046ee39dc4cfd35e62 not found: ID does not exist" Jan 27 10:40:53 crc 
kubenswrapper[4869]: I0127 10:40:53.951677 4869 scope.go:117] "RemoveContainer" containerID="48239ca06fda4eca857bf922c689f1c84836781b549dec555aaf89ebe052f6cc" Jan 27 10:40:53 crc kubenswrapper[4869]: E0127 10:40:53.952075 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48239ca06fda4eca857bf922c689f1c84836781b549dec555aaf89ebe052f6cc\": container with ID starting with 48239ca06fda4eca857bf922c689f1c84836781b549dec555aaf89ebe052f6cc not found: ID does not exist" containerID="48239ca06fda4eca857bf922c689f1c84836781b549dec555aaf89ebe052f6cc" Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.952105 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48239ca06fda4eca857bf922c689f1c84836781b549dec555aaf89ebe052f6cc"} err="failed to get container status \"48239ca06fda4eca857bf922c689f1c84836781b549dec555aaf89ebe052f6cc\": rpc error: code = NotFound desc = could not find container \"48239ca06fda4eca857bf922c689f1c84836781b549dec555aaf89ebe052f6cc\": container with ID starting with 48239ca06fda4eca857bf922c689f1c84836781b549dec555aaf89ebe052f6cc not found: ID does not exist" Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.952120 4869 scope.go:117] "RemoveContainer" containerID="4159bb1023c22272fd86d069e8e21d5f65911eaee3b3c55bb1be5da396871e9f" Jan 27 10:40:53 crc kubenswrapper[4869]: E0127 10:40:53.952375 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4159bb1023c22272fd86d069e8e21d5f65911eaee3b3c55bb1be5da396871e9f\": container with ID starting with 4159bb1023c22272fd86d069e8e21d5f65911eaee3b3c55bb1be5da396871e9f not found: ID does not exist" containerID="4159bb1023c22272fd86d069e8e21d5f65911eaee3b3c55bb1be5da396871e9f" Jan 27 10:40:53 crc kubenswrapper[4869]: I0127 10:40:53.952396 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4159bb1023c22272fd86d069e8e21d5f65911eaee3b3c55bb1be5da396871e9f"} err="failed to get container status \"4159bb1023c22272fd86d069e8e21d5f65911eaee3b3c55bb1be5da396871e9f\": rpc error: code = NotFound desc = could not find container \"4159bb1023c22272fd86d069e8e21d5f65911eaee3b3c55bb1be5da396871e9f\": container with ID starting with 4159bb1023c22272fd86d069e8e21d5f65911eaee3b3c55bb1be5da396871e9f not found: ID does not exist" Jan 27 10:40:54 crc kubenswrapper[4869]: I0127 10:40:54.040785 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52c3544c-2d4e-4a12-84c0-6c9a3cc9a962" path="/var/lib/kubelet/pods/52c3544c-2d4e-4a12-84c0-6c9a3cc9a962/volumes" Jan 27 10:40:55 crc kubenswrapper[4869]: I0127 10:40:55.973356 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4_c3dd40da-058c-45a7-89be-624d27129825/util/0.log" Jan 27 10:40:56 crc kubenswrapper[4869]: I0127 10:40:56.033706 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:40:56 crc kubenswrapper[4869]: E0127 10:40:56.033979 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 
10:40:56 crc kubenswrapper[4869]: I0127 10:40:56.144297 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4_c3dd40da-058c-45a7-89be-624d27129825/util/0.log" Jan 27 10:40:56 crc kubenswrapper[4869]: I0127 10:40:56.158806 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4_c3dd40da-058c-45a7-89be-624d27129825/pull/0.log" Jan 27 10:40:56 crc kubenswrapper[4869]: I0127 10:40:56.187252 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4_c3dd40da-058c-45a7-89be-624d27129825/pull/0.log" Jan 27 10:40:56 crc kubenswrapper[4869]: I0127 10:40:56.332526 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4_c3dd40da-058c-45a7-89be-624d27129825/util/0.log" Jan 27 10:40:56 crc kubenswrapper[4869]: I0127 10:40:56.346440 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4_c3dd40da-058c-45a7-89be-624d27129825/extract/0.log" Jan 27 10:40:56 crc kubenswrapper[4869]: I0127 10:40:56.354255 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_44d2712e1879b28cf16abf3bea80c55f88c48f01b45656b4932e9de1f3m5lb4_c3dd40da-058c-45a7-89be-624d27129825/pull/0.log" Jan 27 10:40:56 crc kubenswrapper[4869]: I0127 10:40:56.487622 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-7p2lg_b8e9717b-6786-4882-99ae-bbcaa887e310/manager/0.log" Jan 27 10:40:56 crc kubenswrapper[4869]: I0127 10:40:56.515751 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-7tzv4_cfdd145e-d7b8-4078-aaa6-9b9827749b9a/manager/0.log" Jan 27 10:40:56 crc kubenswrapper[4869]: I0127 10:40:56.641320 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-g5vdg_60bb147d-e703-4ac4-8068-aa416605b7b5/manager/0.log" Jan 27 10:40:56 crc kubenswrapper[4869]: I0127 10:40:56.729309 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-gbl72_22116ec0-0e77-4752-b374-ad20f73dc3f4/manager/0.log" Jan 27 10:40:56 crc kubenswrapper[4869]: I0127 10:40:56.830038 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-tjf5f_96f59ef6-bb4a-453d-9de2-ba5e0933df0a/manager/0.log" Jan 27 10:40:56 crc kubenswrapper[4869]: I0127 10:40:56.912383 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-8rhfb_1e946d3d-37fb-4bb6-8c8f-b7dcba782889/manager/0.log" Jan 27 10:40:57 crc kubenswrapper[4869]: I0127 10:40:57.091776 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-pvgqp_5a1cd9b4-00f9-430f-8857-718672e03003/manager/0.log" Jan 27 10:40:57 crc kubenswrapper[4869]: I0127 10:40:57.118062 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7f6fb95f66-4xhrc_dffa1b35-d981-4c5c-8df0-341e6a5941a6/manager/0.log" Jan 27 10:40:57 crc kubenswrapper[4869]: I0127 10:40:57.344610 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-kph4d_95e36175-15e3-4f1f-8063-5f3bade317b6/manager/0.log" Jan 27 10:40:57 crc kubenswrapper[4869]: I0127 10:40:57.345501 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-g799f_fa54c4d9-8d7b-4284-bb64-d21d21e9a83e/manager/0.log" Jan 27 10:40:57 crc kubenswrapper[4869]: I0127 10:40:57.500164 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-8bvfh_a3cdb036-7094-48e3-9d3d-8699ece77b88/manager/0.log" Jan 27 10:40:57 crc kubenswrapper[4869]: I0127 10:40:57.530898 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-dj764_46e73157-89a7-4ca4-b71f-2f2e05181ea1/manager/0.log" Jan 27 10:40:57 crc kubenswrapper[4869]: I0127 10:40:57.687870 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-h6pnx_3f283669-e4aa-48ca-b487-c1f34759f97a/manager/0.log" Jan 27 10:40:57 crc kubenswrapper[4869]: I0127 10:40:57.690413 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-xnt8t_33da9d5c-c09e-492d-b23d-6cc5ceaef8b9/manager/0.log" Jan 27 10:40:57 crc kubenswrapper[4869]: I0127 10:40:57.863420 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8544bgr2_facb1993-c676-4104-9090-8f8b4d8576ed/manager/0.log" Jan 27 10:40:58 crc kubenswrapper[4869]: I0127 10:40:58.029134 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-866665f5dd-q6mmh_bcf2849f-1329-4523-83d0-4ad8ec004ce1/operator/0.log" Jan 27 10:40:58 crc kubenswrapper[4869]: I0127 10:40:58.165638 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-l45ns_88ac0b56-ccfe-4c0a-b0eb-b56d1c6ef0fa/registry-server/0.log" Jan 27 10:40:58 crc kubenswrapper[4869]: I0127 10:40:58.290571 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7db7c99649-zbtgz_6c54aaba-e55e-4168-9078-0de1b3f7e7fe/manager/0.log" Jan 27 10:40:58 crc kubenswrapper[4869]: I0127 10:40:58.389907 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-bjt9d_24283d6a-6945-4ce8-991e-25102b2a0bea/manager/0.log" Jan 27 10:40:58 crc kubenswrapper[4869]: I0127 10:40:58.441838 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-rscb2_50c0b859-fc98-4727-a1e2-cd0397e17bb7/manager/0.log" Jan 27 10:40:58 crc kubenswrapper[4869]: I0127 10:40:58.586421 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2vjxv_bbc5d8d8-48d4-4b4f-96ee-87e21cf68ed2/operator/0.log" Jan 27 10:40:58 crc kubenswrapper[4869]: I0127 10:40:58.652090 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-g2cx7_e1367e9a-318d-4800-926b-e0fe5cadf9b7/manager/0.log" Jan 27 10:40:58 crc kubenswrapper[4869]: I0127 10:40:58.819772 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-6xdnp_5b08b641-c912-4e41-911c-6d46e9d589c9/manager/0.log" Jan 27 10:40:58 crc kubenswrapper[4869]: I0127 10:40:58.864103 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-hj2l8_c2bdaaef-bf80-4141-9d4f-d0942aa15e4e/manager/0.log" Jan 27 10:40:58 crc kubenswrapper[4869]: I0127 10:40:58.998424 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-dgc6k_61deff8e-df98-4cc4-86de-f60d12c8cfb9/manager/0.log" Jan 27 10:41:05 crc kubenswrapper[4869]: I0127 10:41:05.033032 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:41:05 crc kubenswrapper[4869]: E0127 10:41:05.033780 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:41:09 crc kubenswrapper[4869]: I0127 10:41:09.033491 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:41:09 crc kubenswrapper[4869]: E0127 10:41:09.034127 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:41:16 crc kubenswrapper[4869]: I0127 10:41:16.033792 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:41:16 crc kubenswrapper[4869]: E0127 10:41:16.035678 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:41:16 crc kubenswrapper[4869]: I0127 10:41:16.701662 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-j9w4z_2ecc898c-2377-4e6f-a02e-028eeca5eec8/control-plane-machine-set-operator/0.log" Jan 27 10:41:16 crc kubenswrapper[4869]: I0127 10:41:16.884572 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-clff8_670d8b6b-95a2-4711-98db-3f71e295093b/kube-rbac-proxy/0.log" Jan 27 10:41:16 crc kubenswrapper[4869]: I0127 10:41:16.898766 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-clff8_670d8b6b-95a2-4711-98db-3f71e295093b/machine-api-operator/0.log" Jan 27 10:41:24 crc kubenswrapper[4869]: I0127 10:41:24.033716 4869 scope.go:117] "RemoveContainer" 
containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:41:24 crc kubenswrapper[4869]: E0127 10:41:24.034571 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:41:28 crc kubenswrapper[4869]: I0127 10:41:28.032885 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:41:29 crc kubenswrapper[4869]: I0127 10:41:29.132947 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerStarted","Data":"738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d"} Jan 27 10:41:29 crc kubenswrapper[4869]: I0127 10:41:29.133591 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 10:41:29 crc kubenswrapper[4869]: I0127 10:41:29.942347 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-gdtvw_ae0abba1-31d1-4b88-92d9-4ddf5a80a00c/cert-manager-controller/0.log" Jan 27 10:41:30 crc kubenswrapper[4869]: I0127 10:41:30.041335 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-vgctl_d066acfd-4f90-4e0d-a241-03b54d7d2ca3/cert-manager-cainjector/0.log" Jan 27 10:41:30 crc kubenswrapper[4869]: I0127 10:41:30.120903 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-sbsgl_8d0480d6-1706-48a9-ba8e-baa99011f330/cert-manager-webhook/0.log" Jan 27 10:41:32 crc kubenswrapper[4869]: E0127 10:41:32.502215 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc9f9a53_b2d4_4a7f_a4ad_5fe5f6b99f80.slice/crio-conmon-738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d.scope\": RecentStats: unable to find data in memory cache]" Jan 27 10:41:33 crc kubenswrapper[4869]: I0127 10:41:33.164345 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" exitCode=0 Jan 27 10:41:33 crc kubenswrapper[4869]: I0127 10:41:33.164406 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerDied","Data":"738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d"} Jan 27 10:41:33 crc kubenswrapper[4869]: I0127 10:41:33.164620 4869 scope.go:117] "RemoveContainer" containerID="a50ee09f03e99ef11dfbd76d0cbe1c9c870d2a837aad90c2777a7976d60f2e6d" Jan 27 10:41:33 crc kubenswrapper[4869]: I0127 10:41:33.165713 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:41:33 crc kubenswrapper[4869]: E0127 10:41:33.166229 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" 
pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:41:37 crc kubenswrapper[4869]: I0127 10:41:37.033036 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:41:38 crc kubenswrapper[4869]: I0127 10:41:38.212417 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerStarted","Data":"50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f"} Jan 27 10:41:38 crc kubenswrapper[4869]: I0127 10:41:38.213736 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 10:41:42 crc kubenswrapper[4869]: I0127 10:41:42.255079 4869 generic.go:334] "Generic (PLEG): container finished" podID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" exitCode=0 Jan 27 10:41:42 crc kubenswrapper[4869]: I0127 10:41:42.255332 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerDied","Data":"50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f"} Jan 27 10:41:42 crc kubenswrapper[4869]: I0127 10:41:42.255610 4869 scope.go:117] "RemoveContainer" containerID="b4150f46a214b4b7e3c6175b4a4f8c60516cf8b2dc71731a2d7f7134826e5fbb" Jan 27 10:41:42 crc kubenswrapper[4869]: I0127 10:41:42.256241 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:41:42 crc kubenswrapper[4869]: E0127 10:41:42.256478 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:41:43 crc kubenswrapper[4869]: I0127 10:41:43.461273 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-kndhh_f9b6b6c3-2c4f-42f9-93a0-1f3b97f055cd/nmstate-console-plugin/0.log" Jan 27 10:41:43 crc kubenswrapper[4869]: I0127 10:41:43.694668 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-gksm8_51ef0cba-8dc4-4ec0-bd06-2db6d2cf6862/nmstate-handler/0.log" Jan 27 10:41:43 crc kubenswrapper[4869]: I0127 10:41:43.856813 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-mnj56_66fb1b74-3877-435b-85b5-4321b9b074a8/kube-rbac-proxy/0.log" Jan 27 10:41:43 crc kubenswrapper[4869]: I0127 10:41:43.996023 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-mnj56_66fb1b74-3877-435b-85b5-4321b9b074a8/nmstate-metrics/0.log" Jan 27 10:41:44 crc kubenswrapper[4869]: I0127 10:41:44.527616 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-pv7vd_427bab1b-d1a1-4106-8d8b-a6e34368576b/nmstate-operator/0.log" Jan 27 10:41:44 crc kubenswrapper[4869]: I0127 10:41:44.596106 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-7brsd_4e9e5492-1772-4814-81db-514251142de5/nmstate-webhook/0.log" Jan 27 10:41:46 crc kubenswrapper[4869]: I0127 10:41:46.033438 4869 
scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:41:46 crc kubenswrapper[4869]: E0127 10:41:46.033715 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:41:58 crc kubenswrapper[4869]: I0127 10:41:58.033350 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:41:58 crc kubenswrapper[4869]: E0127 10:41:58.034223 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:41:59 crc kubenswrapper[4869]: I0127 10:41:59.033775 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:41:59 crc kubenswrapper[4869]: E0127 10:41:59.034151 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:42:07 crc kubenswrapper[4869]: I0127 10:42:07.682657 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rdqt6"] Jan 27 10:42:07 crc kubenswrapper[4869]: E0127 10:42:07.683565 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52c3544c-2d4e-4a12-84c0-6c9a3cc9a962" containerName="extract-utilities" Jan 27 10:42:07 crc kubenswrapper[4869]: I0127 10:42:07.683580 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="52c3544c-2d4e-4a12-84c0-6c9a3cc9a962" containerName="extract-utilities" Jan 27 10:42:07 crc kubenswrapper[4869]: E0127 10:42:07.683594 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52c3544c-2d4e-4a12-84c0-6c9a3cc9a962" containerName="registry-server" Jan 27 10:42:07 crc kubenswrapper[4869]: I0127 10:42:07.683602 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="52c3544c-2d4e-4a12-84c0-6c9a3cc9a962" containerName="registry-server" Jan 27 10:42:07 crc kubenswrapper[4869]: E0127 10:42:07.683626 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52c3544c-2d4e-4a12-84c0-6c9a3cc9a962" containerName="extract-content" Jan 27 10:42:07 crc kubenswrapper[4869]: I0127 10:42:07.683637 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="52c3544c-2d4e-4a12-84c0-6c9a3cc9a962" containerName="extract-content" Jan 27 10:42:07 crc kubenswrapper[4869]: I0127 10:42:07.683927 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="52c3544c-2d4e-4a12-84c0-6c9a3cc9a962" containerName="registry-server" Jan 27 10:42:07 crc kubenswrapper[4869]: I0127 10:42:07.686106 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdqt6" Jan 27 10:42:07 crc kubenswrapper[4869]: I0127 10:42:07.695159 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdqt6"] Jan 27 10:42:07 crc kubenswrapper[4869]: I0127 10:42:07.795992 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7-catalog-content\") pod \"redhat-marketplace-rdqt6\" (UID: \"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7\") " pod="openshift-marketplace/redhat-marketplace-rdqt6" Jan 27 10:42:07 crc kubenswrapper[4869]: I0127 10:42:07.796046 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfq2z\" (UniqueName: \"kubernetes.io/projected/c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7-kube-api-access-pfq2z\") pod \"redhat-marketplace-rdqt6\" (UID: \"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7\") " pod="openshift-marketplace/redhat-marketplace-rdqt6" Jan 27 10:42:07 crc kubenswrapper[4869]: I0127 10:42:07.796185 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7-utilities\") pod \"redhat-marketplace-rdqt6\" (UID: \"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7\") " pod="openshift-marketplace/redhat-marketplace-rdqt6" Jan 27 10:42:07 crc kubenswrapper[4869]: I0127 10:42:07.897794 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7-utilities\") pod \"redhat-marketplace-rdqt6\" (UID: \"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7\") " pod="openshift-marketplace/redhat-marketplace-rdqt6" Jan 27 10:42:07 crc kubenswrapper[4869]: I0127 10:42:07.897906 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7-catalog-content\") pod \"redhat-marketplace-rdqt6\" (UID: \"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7\") " pod="openshift-marketplace/redhat-marketplace-rdqt6" Jan 27 10:42:07 crc kubenswrapper[4869]: I0127 10:42:07.897942 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfq2z\" (UniqueName: \"kubernetes.io/projected/c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7-kube-api-access-pfq2z\") pod \"redhat-marketplace-rdqt6\" (UID: \"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7\") " pod="openshift-marketplace/redhat-marketplace-rdqt6" Jan 27 10:42:07 crc kubenswrapper[4869]: I0127 10:42:07.898619 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7-utilities\") pod \"redhat-marketplace-rdqt6\" (UID: \"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7\") " pod="openshift-marketplace/redhat-marketplace-rdqt6" Jan 27 10:42:07 crc kubenswrapper[4869]: I0127 10:42:07.898713 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7-catalog-content\") pod \"redhat-marketplace-rdqt6\" (UID: \"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7\") " pod="openshift-marketplace/redhat-marketplace-rdqt6" Jan 27 10:42:07 crc kubenswrapper[4869]: I0127 10:42:07.923278 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-pfq2z\" (UniqueName: \"kubernetes.io/projected/c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7-kube-api-access-pfq2z\") pod \"redhat-marketplace-rdqt6\" (UID: \"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7\") " pod="openshift-marketplace/redhat-marketplace-rdqt6" Jan 27 10:42:08 crc kubenswrapper[4869]: I0127 10:42:08.024360 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdqt6" Jan 27 10:42:08 crc kubenswrapper[4869]: W0127 10:42:08.508695 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc08a03d8_5435_4d4b_8c2c_dbe6fadeb4d7.slice/crio-33af381ae1306de8a2cbcfd59697923059a68d18a020cce269e0fed55361f200 WatchSource:0}: Error finding container 33af381ae1306de8a2cbcfd59697923059a68d18a020cce269e0fed55361f200: Status 404 returned error can't find the container with id 33af381ae1306de8a2cbcfd59697923059a68d18a020cce269e0fed55361f200 Jan 27 10:42:08 crc kubenswrapper[4869]: I0127 10:42:08.511577 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdqt6"] Jan 27 10:42:09 crc kubenswrapper[4869]: I0127 10:42:09.486593 4869 generic.go:334] "Generic (PLEG): container finished" podID="c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7" containerID="536a334e20d82f3520e35f9865bd8125c98a18d8e33d92b1a6cd87f879a6b9a1" exitCode=0 Jan 27 10:42:09 crc kubenswrapper[4869]: I0127 10:42:09.486709 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdqt6" event={"ID":"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7","Type":"ContainerDied","Data":"536a334e20d82f3520e35f9865bd8125c98a18d8e33d92b1a6cd87f879a6b9a1"} Jan 27 10:42:09 crc kubenswrapper[4869]: I0127 10:42:09.486919 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdqt6" event={"ID":"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7","Type":"ContainerStarted","Data":"33af381ae1306de8a2cbcfd59697923059a68d18a020cce269e0fed55361f200"} Jan 27 10:42:10 crc kubenswrapper[4869]: I0127 10:42:10.496290 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdqt6" event={"ID":"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7","Type":"ContainerStarted","Data":"8783a47a4d87d4f3198d682887afe0ed6eaa29f30500f8cf8ad84e46bcffced5"} Jan 27 10:42:11 crc kubenswrapper[4869]: I0127 10:42:11.033717 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:42:11 crc kubenswrapper[4869]: E0127 10:42:11.034034 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:42:11 crc kubenswrapper[4869]: I0127 10:42:11.034113 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:42:11 crc kubenswrapper[4869]: E0127 10:42:11.034439 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" 
podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:42:11 crc kubenswrapper[4869]: I0127 10:42:11.382353 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-4lv4g_a494a726-4ad9-4a6a-a91a-bd3a8865d1af/kube-rbac-proxy/0.log" Jan 27 10:42:11 crc kubenswrapper[4869]: I0127 10:42:11.478450 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-4lv4g_a494a726-4ad9-4a6a-a91a-bd3a8865d1af/controller/0.log" Jan 27 10:42:11 crc kubenswrapper[4869]: I0127 10:42:11.510068 4869 generic.go:334] "Generic (PLEG): container finished" podID="c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7" containerID="8783a47a4d87d4f3198d682887afe0ed6eaa29f30500f8cf8ad84e46bcffced5" exitCode=0 Jan 27 10:42:11 crc kubenswrapper[4869]: I0127 10:42:11.510108 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdqt6" event={"ID":"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7","Type":"ContainerDied","Data":"8783a47a4d87d4f3198d682887afe0ed6eaa29f30500f8cf8ad84e46bcffced5"} Jan 27 10:42:11 crc kubenswrapper[4869]: I0127 10:42:11.615598 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4fnc_96c9106e-9af3-468a-8a06-4fbc013ab6d1/cp-frr-files/0.log" Jan 27 10:42:11 crc kubenswrapper[4869]: I0127 10:42:11.797139 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4fnc_96c9106e-9af3-468a-8a06-4fbc013ab6d1/cp-frr-files/0.log" Jan 27 10:42:11 crc kubenswrapper[4869]: I0127 10:42:11.800658 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4fnc_96c9106e-9af3-468a-8a06-4fbc013ab6d1/cp-metrics/0.log" Jan 27 10:42:11 crc kubenswrapper[4869]: I0127 10:42:11.818882 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4fnc_96c9106e-9af3-468a-8a06-4fbc013ab6d1/cp-reloader/0.log" Jan 27 10:42:11 crc kubenswrapper[4869]: I0127 10:42:11.861478 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4fnc_96c9106e-9af3-468a-8a06-4fbc013ab6d1/cp-reloader/0.log" Jan 27 10:42:12 crc kubenswrapper[4869]: I0127 10:42:12.045274 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4fnc_96c9106e-9af3-468a-8a06-4fbc013ab6d1/cp-metrics/0.log" Jan 27 10:42:12 crc kubenswrapper[4869]: I0127 10:42:12.046634 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4fnc_96c9106e-9af3-468a-8a06-4fbc013ab6d1/cp-frr-files/0.log" Jan 27 10:42:12 crc kubenswrapper[4869]: I0127 10:42:12.096603 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4fnc_96c9106e-9af3-468a-8a06-4fbc013ab6d1/cp-reloader/0.log" Jan 27 10:42:12 crc kubenswrapper[4869]: I0127 10:42:12.129974 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4fnc_96c9106e-9af3-468a-8a06-4fbc013ab6d1/cp-metrics/0.log" Jan 27 10:42:12 crc kubenswrapper[4869]: I0127 10:42:12.269770 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4fnc_96c9106e-9af3-468a-8a06-4fbc013ab6d1/cp-frr-files/0.log" Jan 27 10:42:12 crc kubenswrapper[4869]: I0127 10:42:12.273744 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4fnc_96c9106e-9af3-468a-8a06-4fbc013ab6d1/cp-reloader/0.log" Jan 27 10:42:12 crc kubenswrapper[4869]: I0127 10:42:12.289383 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-b4fnc_96c9106e-9af3-468a-8a06-4fbc013ab6d1/cp-metrics/0.log" Jan 27 10:42:12 crc kubenswrapper[4869]: I0127 10:42:12.326518 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4fnc_96c9106e-9af3-468a-8a06-4fbc013ab6d1/controller/0.log" Jan 27 10:42:12 crc kubenswrapper[4869]: I0127 10:42:12.449375 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4fnc_96c9106e-9af3-468a-8a06-4fbc013ab6d1/frr-metrics/0.log" Jan 27 10:42:12 crc kubenswrapper[4869]: I0127 10:42:12.464378 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4fnc_96c9106e-9af3-468a-8a06-4fbc013ab6d1/kube-rbac-proxy/0.log" Jan 27 10:42:12 crc kubenswrapper[4869]: I0127 10:42:12.495527 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4fnc_96c9106e-9af3-468a-8a06-4fbc013ab6d1/kube-rbac-proxy-frr/0.log" Jan 27 10:42:12 crc kubenswrapper[4869]: I0127 10:42:12.524507 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdqt6" event={"ID":"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7","Type":"ContainerStarted","Data":"a2f45dd22ec0e6b521e341883c747613f2498bcf2feef298c3a456eb42f38543"} Jan 27 10:42:12 crc kubenswrapper[4869]: I0127 10:42:12.544774 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rdqt6" podStartSLOduration=3.073615355 podStartE2EDuration="5.544758542s" podCreationTimestamp="2026-01-27 10:42:07 +0000 UTC" firstStartedPulling="2026-01-27 10:42:09.488589457 +0000 UTC m=+2898.109013570" lastFinishedPulling="2026-01-27 10:42:11.959732674 +0000 UTC m=+2900.580156757" observedRunningTime="2026-01-27 10:42:12.541473399 +0000 UTC m=+2901.161897482" watchObservedRunningTime="2026-01-27 10:42:12.544758542 +0000 UTC m=+2901.165182625" Jan 27 10:42:12 crc kubenswrapper[4869]: I0127 10:42:12.746210 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4fnc_96c9106e-9af3-468a-8a06-4fbc013ab6d1/reloader/0.log" Jan 27 10:42:12 crc kubenswrapper[4869]: I0127 10:42:12.792986 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-vtdm4_4558cbce-4dbb-4621-a880-674cc8ea8353/frr-k8s-webhook-server/0.log" Jan 27 10:42:12 crc kubenswrapper[4869]: I0127 10:42:12.942140 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b4fnc_96c9106e-9af3-468a-8a06-4fbc013ab6d1/frr/0.log" Jan 27 10:42:12 crc kubenswrapper[4869]: I0127 10:42:12.948527 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-66ccd9d9b6-msfrv_f8c9dcc8-f88f-4243-8be7-81ce1b582448/manager/0.log" Jan 27 10:42:13 crc kubenswrapper[4869]: I0127 10:42:13.103464 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-65584b46bc-jsdnn_a4d3a261-e179-4022-90d9-bacdc6673d2e/webhook-server/0.log" Jan 27 10:42:13 crc kubenswrapper[4869]: I0127 10:42:13.150225 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kbqgv_49d5f9df-c528-4fa0-bc0c-fba73c19add9/kube-rbac-proxy/0.log" Jan 27 10:42:13 crc kubenswrapper[4869]: I0127 10:42:13.308399 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kbqgv_49d5f9df-c528-4fa0-bc0c-fba73c19add9/speaker/0.log" Jan 27 10:42:15 crc kubenswrapper[4869]: I0127 
Jan 27 10:42:15 crc kubenswrapper[4869]: I0127 10:42:15.697501 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:42:15 crc kubenswrapper[4869]: I0127 10:42:15.697877 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:42:18 crc kubenswrapper[4869]: I0127 10:42:18.024443 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rdqt6" Jan 27 10:42:18 crc kubenswrapper[4869]: I0127 10:42:18.024929 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rdqt6" Jan 27 10:42:18 crc kubenswrapper[4869]: I0127 10:42:18.089002 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rdqt6" Jan 27 10:42:18 crc kubenswrapper[4869]: I0127 10:42:18.628138 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rdqt6" Jan 27 10:42:18 crc kubenswrapper[4869]: I0127 10:42:18.679350 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdqt6"] Jan 27 10:42:20 crc kubenswrapper[4869]: I0127 10:42:20.581948 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rdqt6" podUID="c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7" containerName="registry-server" containerID="cri-o://a2f45dd22ec0e6b521e341883c747613f2498bcf2feef298c3a456eb42f38543" gracePeriod=2 Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.001855 4869 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdqt6" Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.028671 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfq2z\" (UniqueName: \"kubernetes.io/projected/c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7-kube-api-access-pfq2z\") pod \"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7\" (UID: \"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7\") " Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.028782 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7-utilities\") pod \"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7\" (UID: \"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7\") " Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.028826 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7-catalog-content\") pod \"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7\" (UID: \"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7\") " Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.030079 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7-utilities" (OuterVolumeSpecName: "utilities") pod "c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7" (UID: "c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.039386 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7-kube-api-access-pfq2z" (OuterVolumeSpecName: "kube-api-access-pfq2z") pod "c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7" (UID: "c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7"). InnerVolumeSpecName "kube-api-access-pfq2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.050730 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7" (UID: "c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.131367 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.131405 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.131419 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfq2z\" (UniqueName: \"kubernetes.io/projected/c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7-kube-api-access-pfq2z\") on node \"crc\" DevicePath \"\"" Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.593801 4869 generic.go:334] "Generic (PLEG): container finished" podID="c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7" containerID="a2f45dd22ec0e6b521e341883c747613f2498bcf2feef298c3a456eb42f38543" exitCode=0 Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.593914 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdqt6" Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.593909 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdqt6" event={"ID":"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7","Type":"ContainerDied","Data":"a2f45dd22ec0e6b521e341883c747613f2498bcf2feef298c3a456eb42f38543"} Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.594002 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdqt6" event={"ID":"c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7","Type":"ContainerDied","Data":"33af381ae1306de8a2cbcfd59697923059a68d18a020cce269e0fed55361f200"} Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.594048 4869 scope.go:117] "RemoveContainer" containerID="a2f45dd22ec0e6b521e341883c747613f2498bcf2feef298c3a456eb42f38543" Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.625367 4869 scope.go:117] "RemoveContainer" containerID="8783a47a4d87d4f3198d682887afe0ed6eaa29f30500f8cf8ad84e46bcffced5" Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.632468 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdqt6"] Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.651101 4869 scope.go:117] "RemoveContainer" containerID="536a334e20d82f3520e35f9865bd8125c98a18d8e33d92b1a6cd87f879a6b9a1" Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.653320 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdqt6"] Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.704436 4869 scope.go:117] "RemoveContainer" containerID="a2f45dd22ec0e6b521e341883c747613f2498bcf2feef298c3a456eb42f38543" Jan 27 10:42:21 crc kubenswrapper[4869]: E0127 10:42:21.706618 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2f45dd22ec0e6b521e341883c747613f2498bcf2feef298c3a456eb42f38543\": container with ID starting with a2f45dd22ec0e6b521e341883c747613f2498bcf2feef298c3a456eb42f38543 not found: ID does not exist" containerID="a2f45dd22ec0e6b521e341883c747613f2498bcf2feef298c3a456eb42f38543" Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.706671 4869 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2f45dd22ec0e6b521e341883c747613f2498bcf2feef298c3a456eb42f38543"} err="failed to get container status \"a2f45dd22ec0e6b521e341883c747613f2498bcf2feef298c3a456eb42f38543\": rpc error: code = NotFound desc = could not find container \"a2f45dd22ec0e6b521e341883c747613f2498bcf2feef298c3a456eb42f38543\": container with ID starting with a2f45dd22ec0e6b521e341883c747613f2498bcf2feef298c3a456eb42f38543 not found: ID does not exist" Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.706705 4869 scope.go:117] "RemoveContainer" containerID="8783a47a4d87d4f3198d682887afe0ed6eaa29f30500f8cf8ad84e46bcffced5" Jan 27 10:42:21 crc kubenswrapper[4869]: E0127 10:42:21.707223 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8783a47a4d87d4f3198d682887afe0ed6eaa29f30500f8cf8ad84e46bcffced5\": container with ID starting with 8783a47a4d87d4f3198d682887afe0ed6eaa29f30500f8cf8ad84e46bcffced5 not found: ID does not exist" containerID="8783a47a4d87d4f3198d682887afe0ed6eaa29f30500f8cf8ad84e46bcffced5" Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.707253 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8783a47a4d87d4f3198d682887afe0ed6eaa29f30500f8cf8ad84e46bcffced5"} err="failed to get container status \"8783a47a4d87d4f3198d682887afe0ed6eaa29f30500f8cf8ad84e46bcffced5\": rpc error: code = NotFound desc = could not find container \"8783a47a4d87d4f3198d682887afe0ed6eaa29f30500f8cf8ad84e46bcffced5\": container with ID starting with 8783a47a4d87d4f3198d682887afe0ed6eaa29f30500f8cf8ad84e46bcffced5 not found: ID does not exist" Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.707267 4869 scope.go:117] "RemoveContainer" containerID="536a334e20d82f3520e35f9865bd8125c98a18d8e33d92b1a6cd87f879a6b9a1" Jan 27 10:42:21 crc kubenswrapper[4869]: E0127 10:42:21.707722 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"536a334e20d82f3520e35f9865bd8125c98a18d8e33d92b1a6cd87f879a6b9a1\": container with ID starting with 536a334e20d82f3520e35f9865bd8125c98a18d8e33d92b1a6cd87f879a6b9a1 not found: ID does not exist" containerID="536a334e20d82f3520e35f9865bd8125c98a18d8e33d92b1a6cd87f879a6b9a1" Jan 27 10:42:21 crc kubenswrapper[4869]: I0127 10:42:21.707775 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"536a334e20d82f3520e35f9865bd8125c98a18d8e33d92b1a6cd87f879a6b9a1"} err="failed to get container status \"536a334e20d82f3520e35f9865bd8125c98a18d8e33d92b1a6cd87f879a6b9a1\": rpc error: code = NotFound desc = could not find container \"536a334e20d82f3520e35f9865bd8125c98a18d8e33d92b1a6cd87f879a6b9a1\": container with ID starting with 536a334e20d82f3520e35f9865bd8125c98a18d8e33d92b1a6cd87f879a6b9a1 not found: ID does not exist" Jan 27 10:42:22 crc kubenswrapper[4869]: I0127 10:42:22.043035 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:42:22 crc kubenswrapper[4869]: I0127 10:42:22.043276 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7" path="/var/lib/kubelet/pods/c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7/volumes"
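The E-level "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pairs above look alarming but are the kubelet re-deleting containers that CRI-O has already removed: each "RemoveContainer" succeeds, and the follow-up status lookup for the same 64-hex ID simply comes back NotFound. When triaging a flattened journal like this one, it helps to split it back into individual kubelet entries and keep only the E-level ones that are not this post-removal noise. A sketch keyed to this dump's exact prefix ("Jan 27 ... crc"); the pattern names are illustrative:

    import re
    import sys

    # Every kubelet entry in this dump starts with the journal prefix.
    ENTRY = re.compile(r'(?=Jan 27 \d{2}:\d{2}:\d{2} crc )')
    # Benign post-removal lookups: the container is already gone.
    BENIGN = re.compile(r'code = NotFound desc = could not find container')

    for entry in ENTRY.split(sys.stdin.read()):
        if ' E0127 ' in entry and not BENIGN.search(entry):
            print(entry.strip())

Against this section, the filter drops the NotFound pairs and leaves the repeating CrashLoopBackOff "Error syncing pod, skipping" entries for the two openstack rabbitmq pods, which are the actual persistent failures here.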
Jan 27 10:42:22 crc kubenswrapper[4869]: E0127 10:42:22.043377 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:42:25 crc kubenswrapper[4869]: I0127 10:42:25.758394 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh_45ab252a-dc37-43ef-8c03-5fc40a7d6d89/util/0.log" Jan 27 10:42:25 crc kubenswrapper[4869]: I0127 10:42:25.945543 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh_45ab252a-dc37-43ef-8c03-5fc40a7d6d89/pull/0.log" Jan 27 10:42:25 crc kubenswrapper[4869]: I0127 10:42:25.945804 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh_45ab252a-dc37-43ef-8c03-5fc40a7d6d89/pull/0.log" Jan 27 10:42:25 crc kubenswrapper[4869]: I0127 10:42:25.971054 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh_45ab252a-dc37-43ef-8c03-5fc40a7d6d89/util/0.log" Jan 27 10:42:26 crc kubenswrapper[4869]: I0127 10:42:26.034011 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:42:26 crc kubenswrapper[4869]: E0127 10:42:26.034206 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:42:26 crc kubenswrapper[4869]: I0127 10:42:26.116020 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh_45ab252a-dc37-43ef-8c03-5fc40a7d6d89/util/0.log" Jan 27 10:42:26 crc kubenswrapper[4869]: I0127 10:42:26.151704 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh_45ab252a-dc37-43ef-8c03-5fc40a7d6d89/extract/0.log" Jan 27 10:42:26 crc kubenswrapper[4869]: I0127 10:42:26.153464 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc6ftmh_45ab252a-dc37-43ef-8c03-5fc40a7d6d89/pull/0.log" Jan 27 10:42:26 crc kubenswrapper[4869]: I0127 10:42:26.282698 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l_4456d111-1f5f-4ca1-bebd-88fb3faa3033/util/0.log" Jan 27 10:42:26 crc kubenswrapper[4869]: I0127 10:42:26.441413 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l_4456d111-1f5f-4ca1-bebd-88fb3faa3033/pull/0.log" Jan 27 10:42:26 crc kubenswrapper[4869]: I0127 10:42:26.442568 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l_4456d111-1f5f-4ca1-bebd-88fb3faa3033/pull/0.log" Jan 27 10:42:26 crc kubenswrapper[4869]: I0127
10:42:26.446779 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l_4456d111-1f5f-4ca1-bebd-88fb3faa3033/util/0.log" Jan 27 10:42:26 crc kubenswrapper[4869]: I0127 10:42:26.566182 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l_4456d111-1f5f-4ca1-bebd-88fb3faa3033/util/0.log" Jan 27 10:42:26 crc kubenswrapper[4869]: I0127 10:42:26.605813 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l_4456d111-1f5f-4ca1-bebd-88fb3faa3033/extract/0.log" Jan 27 10:42:26 crc kubenswrapper[4869]: I0127 10:42:26.606458 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713j2f7l_4456d111-1f5f-4ca1-bebd-88fb3faa3033/pull/0.log" Jan 27 10:42:26 crc kubenswrapper[4869]: I0127 10:42:26.763946 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qlp6v_eadff3a0-aaea-41ca-8eca-349320b5b56c/extract-utilities/0.log" Jan 27 10:42:26 crc kubenswrapper[4869]: I0127 10:42:26.903567 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qlp6v_eadff3a0-aaea-41ca-8eca-349320b5b56c/extract-content/0.log" Jan 27 10:42:26 crc kubenswrapper[4869]: I0127 10:42:26.914330 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qlp6v_eadff3a0-aaea-41ca-8eca-349320b5b56c/extract-utilities/0.log" Jan 27 10:42:26 crc kubenswrapper[4869]: I0127 10:42:26.914330 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qlp6v_eadff3a0-aaea-41ca-8eca-349320b5b56c/extract-content/0.log" Jan 27 10:42:27 crc kubenswrapper[4869]: I0127 10:42:27.090542 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qlp6v_eadff3a0-aaea-41ca-8eca-349320b5b56c/extract-content/0.log" Jan 27 10:42:27 crc kubenswrapper[4869]: I0127 10:42:27.102037 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qlp6v_eadff3a0-aaea-41ca-8eca-349320b5b56c/extract-utilities/0.log" Jan 27 10:42:27 crc kubenswrapper[4869]: I0127 10:42:27.225778 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qlp6v_eadff3a0-aaea-41ca-8eca-349320b5b56c/registry-server/0.log" Jan 27 10:42:27 crc kubenswrapper[4869]: I0127 10:42:27.418047 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tpzwx_ec5ef27b-032b-402b-97bf-bb3b340ceccb/extract-utilities/0.log" Jan 27 10:42:27 crc kubenswrapper[4869]: I0127 10:42:27.531866 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tpzwx_ec5ef27b-032b-402b-97bf-bb3b340ceccb/extract-content/0.log" Jan 27 10:42:27 crc kubenswrapper[4869]: I0127 10:42:27.535518 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tpzwx_ec5ef27b-032b-402b-97bf-bb3b340ceccb/extract-utilities/0.log" Jan 27 10:42:27 crc kubenswrapper[4869]: I0127 10:42:27.547016 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-tpzwx_ec5ef27b-032b-402b-97bf-bb3b340ceccb/extract-content/0.log" Jan 27 10:42:27 crc kubenswrapper[4869]: I0127 10:42:27.693031 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tpzwx_ec5ef27b-032b-402b-97bf-bb3b340ceccb/extract-utilities/0.log" Jan 27 10:42:27 crc kubenswrapper[4869]: I0127 10:42:27.699677 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tpzwx_ec5ef27b-032b-402b-97bf-bb3b340ceccb/extract-content/0.log" Jan 27 10:42:27 crc kubenswrapper[4869]: I0127 10:42:27.925722 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tzfz8_bf753a6b-b086-4055-8232-efcb9ed72ac6/extract-utilities/0.log" Jan 27 10:42:27 crc kubenswrapper[4869]: I0127 10:42:27.994422 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8l8tx_ddd95684-409f-4d98-8974-55d5374ee6ba/marketplace-operator/0.log" Jan 27 10:42:28 crc kubenswrapper[4869]: I0127 10:42:28.005369 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tpzwx_ec5ef27b-032b-402b-97bf-bb3b340ceccb/registry-server/0.log" Jan 27 10:42:28 crc kubenswrapper[4869]: I0127 10:42:28.169453 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tzfz8_bf753a6b-b086-4055-8232-efcb9ed72ac6/extract-content/0.log" Jan 27 10:42:28 crc kubenswrapper[4869]: I0127 10:42:28.173712 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tzfz8_bf753a6b-b086-4055-8232-efcb9ed72ac6/extract-utilities/0.log" Jan 27 10:42:28 crc kubenswrapper[4869]: I0127 10:42:28.193586 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tzfz8_bf753a6b-b086-4055-8232-efcb9ed72ac6/extract-content/0.log" Jan 27 10:42:28 crc kubenswrapper[4869]: I0127 10:42:28.314737 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tzfz8_bf753a6b-b086-4055-8232-efcb9ed72ac6/extract-utilities/0.log" Jan 27 10:42:28 crc kubenswrapper[4869]: I0127 10:42:28.363587 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tzfz8_bf753a6b-b086-4055-8232-efcb9ed72ac6/extract-content/0.log" Jan 27 10:42:28 crc kubenswrapper[4869]: I0127 10:42:28.482272 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tzfz8_bf753a6b-b086-4055-8232-efcb9ed72ac6/registry-server/0.log" Jan 27 10:42:28 crc kubenswrapper[4869]: I0127 10:42:28.530983 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zqvpq_96a927cc-df8b-4011-8eb6-ab3b2ebdda7a/extract-utilities/0.log" Jan 27 10:42:28 crc kubenswrapper[4869]: I0127 10:42:28.677730 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zqvpq_96a927cc-df8b-4011-8eb6-ab3b2ebdda7a/extract-content/0.log" Jan 27 10:42:28 crc kubenswrapper[4869]: I0127 10:42:28.681406 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zqvpq_96a927cc-df8b-4011-8eb6-ab3b2ebdda7a/extract-utilities/0.log" Jan 27 10:42:28 crc kubenswrapper[4869]: I0127 10:42:28.684668 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-zqvpq_96a927cc-df8b-4011-8eb6-ab3b2ebdda7a/extract-content/0.log" Jan 27 10:42:28 crc kubenswrapper[4869]: I0127 10:42:28.861029 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zqvpq_96a927cc-df8b-4011-8eb6-ab3b2ebdda7a/extract-utilities/0.log" Jan 27 10:42:28 crc kubenswrapper[4869]: I0127 10:42:28.902885 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zqvpq_96a927cc-df8b-4011-8eb6-ab3b2ebdda7a/extract-content/0.log" Jan 27 10:42:29 crc kubenswrapper[4869]: I0127 10:42:29.285229 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zqvpq_96a927cc-df8b-4011-8eb6-ab3b2ebdda7a/registry-server/0.log" Jan 27 10:42:35 crc kubenswrapper[4869]: I0127 10:42:35.036445 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:42:35 crc kubenswrapper[4869]: E0127 10:42:35.037055 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:42:39 crc kubenswrapper[4869]: I0127 10:42:39.033313 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:42:39 crc kubenswrapper[4869]: E0127 10:42:39.034123 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:42:45 crc kubenswrapper[4869]: I0127 10:42:45.697863 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:42:45 crc kubenswrapper[4869]: I0127 10:42:45.698362 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:42:47 crc kubenswrapper[4869]: I0127 10:42:47.033268 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:42:47 crc kubenswrapper[4869]: E0127 10:42:47.033733 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:42:53 crc kubenswrapper[4869]: I0127 10:42:53.034056 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f"
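Both rabbitmq pods are now pinned at the kubelet's maximum restart backoff: each "RemoveContainer" attempt above is immediately answered with "back-off 5m0s restarting failed container", and the pair of entries keeps recurring as the pod workers re-sync. With the defaults documented for the kubelet, the crash-loop delay starts at 10 s, doubles per restart, and is capped at five minutes, resetting only after a container has run cleanly for ten minutes. A one-function sketch of that schedule (names are illustrative):

    # Kubelet default crash-loop backoff: 10 s base, doubling, 5 m cap.
    BASE_S, CAP_S = 10, 300

    def backoff_s(restarts: int) -> int:
        """Delay before the next restart attempt, in seconds."""
        return min(BASE_S * 2 ** restarts, CAP_S)

    print([backoff_s(n) for n in range(7)])  # [10, 20, 40, 80, 160, 300, 300]

So the "back-off 5m0s" in these entries is the 300 s cap; the frequent E-level lines are the sync loop re-checking an already-capped backoff, not additional restarts.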
Jan 27 10:42:53 crc kubenswrapper[4869]: E0127 10:42:53.035183 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:43:02 crc kubenswrapper[4869]: I0127 10:43:02.043748 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:43:02 crc kubenswrapper[4869]: E0127 10:43:02.045005 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:43:08 crc kubenswrapper[4869]: I0127 10:43:08.034054 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:43:08 crc kubenswrapper[4869]: E0127 10:43:08.036106 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:43:13 crc kubenswrapper[4869]: I0127 10:43:13.033430 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:43:13 crc kubenswrapper[4869]: E0127 10:43:13.034169 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:43:15 crc kubenswrapper[4869]: I0127 10:43:15.697551 4869 patch_prober.go:28] interesting pod/machine-config-daemon-k2qh9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 10:43:15 crc kubenswrapper[4869]: I0127 10:43:15.698089 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 10:43:15 crc kubenswrapper[4869]: I0127 10:43:15.698149 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" Jan 27 10:43:15 crc kubenswrapper[4869]: I0127 10:43:15.699094 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a5e719eee11fa7182938ba5394dd5451103ebb77494fcce1358cfad696163d9"} pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 10:43:15 crc kubenswrapper[4869]: I0127
10:43:15.699177 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerName="machine-config-daemon" containerID="cri-o://6a5e719eee11fa7182938ba5394dd5451103ebb77494fcce1358cfad696163d9" gracePeriod=600 Jan 27 10:43:15 crc kubenswrapper[4869]: E0127 10:43:15.831117 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:43:16 crc kubenswrapper[4869]: I0127 10:43:16.042542 4869 generic.go:334] "Generic (PLEG): container finished" podID="12a3e458-3f5f-46cf-b242-9a3986250bcf" containerID="6a5e719eee11fa7182938ba5394dd5451103ebb77494fcce1358cfad696163d9" exitCode=0 Jan 27 10:43:16 crc kubenswrapper[4869]: I0127 10:43:16.047959 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" event={"ID":"12a3e458-3f5f-46cf-b242-9a3986250bcf","Type":"ContainerDied","Data":"6a5e719eee11fa7182938ba5394dd5451103ebb77494fcce1358cfad696163d9"} Jan 27 10:43:16 crc kubenswrapper[4869]: I0127 10:43:16.048412 4869 scope.go:117] "RemoveContainer" containerID="f26f1fb2cee5006f1f75a2a2a614b9386b95a957b8a625a62d67f3bf0077c924" Jan 27 10:43:16 crc kubenswrapper[4869]: I0127 10:43:16.049261 4869 scope.go:117] "RemoveContainer" containerID="6a5e719eee11fa7182938ba5394dd5451103ebb77494fcce1358cfad696163d9" Jan 27 10:43:16 crc kubenswrapper[4869]: E0127 10:43:16.049972 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:43:21 crc kubenswrapper[4869]: I0127 10:43:21.035074 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:43:21 crc kubenswrapper[4869]: E0127 10:43:21.036077 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:43:27 crc kubenswrapper[4869]: I0127 10:43:27.033801 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:43:27 crc kubenswrapper[4869]: E0127 10:43:27.034319 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:43:28 crc kubenswrapper[4869]: I0127 10:43:28.033107 4869 scope.go:117] "RemoveContainer" 
containerID="6a5e719eee11fa7182938ba5394dd5451103ebb77494fcce1358cfad696163d9" Jan 27 10:43:28 crc kubenswrapper[4869]: E0127 10:43:28.033612 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:43:34 crc kubenswrapper[4869]: I0127 10:43:34.043773 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:43:34 crc kubenswrapper[4869]: E0127 10:43:34.044708 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:43:39 crc kubenswrapper[4869]: I0127 10:43:39.261169 4869 generic.go:334] "Generic (PLEG): container finished" podID="293e3afd-0b77-490d-88bc-56f06235a889" containerID="0141cd4f3e2cb3559b8bbc901e639f25c8fd1a07de3896f1c2d198ecdcfc5bc7" exitCode=0 Jan 27 10:43:39 crc kubenswrapper[4869]: I0127 10:43:39.261253 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-smctz/must-gather-gm4xt" event={"ID":"293e3afd-0b77-490d-88bc-56f06235a889","Type":"ContainerDied","Data":"0141cd4f3e2cb3559b8bbc901e639f25c8fd1a07de3896f1c2d198ecdcfc5bc7"} Jan 27 10:43:39 crc kubenswrapper[4869]: I0127 10:43:39.262515 4869 scope.go:117] "RemoveContainer" containerID="0141cd4f3e2cb3559b8bbc901e639f25c8fd1a07de3896f1c2d198ecdcfc5bc7" Jan 27 10:43:39 crc kubenswrapper[4869]: I0127 10:43:39.464134 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-smctz_must-gather-gm4xt_293e3afd-0b77-490d-88bc-56f06235a889/gather/0.log" Jan 27 10:43:40 crc kubenswrapper[4869]: I0127 10:43:40.033514 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:43:40 crc kubenswrapper[4869]: I0127 10:43:40.034149 4869 scope.go:117] "RemoveContainer" containerID="6a5e719eee11fa7182938ba5394dd5451103ebb77494fcce1358cfad696163d9" Jan 27 10:43:40 crc kubenswrapper[4869]: E0127 10:43:40.034156 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:43:40 crc kubenswrapper[4869]: E0127 10:43:40.034669 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:43:46 crc kubenswrapper[4869]: I0127 10:43:46.033051 4869 scope.go:117] "RemoveContainer" 
containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:43:46 crc kubenswrapper[4869]: E0127 10:43:46.033460 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:43:46 crc kubenswrapper[4869]: I0127 10:43:46.381045 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-smctz/must-gather-gm4xt"] Jan 27 10:43:46 crc kubenswrapper[4869]: I0127 10:43:46.381409 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-smctz/must-gather-gm4xt" podUID="293e3afd-0b77-490d-88bc-56f06235a889" containerName="copy" containerID="cri-o://b92be076f1038507985c02ab883776ca3fe5ff24cdc4929261e1448696df239a" gracePeriod=2 Jan 27 10:43:46 crc kubenswrapper[4869]: I0127 10:43:46.389083 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-smctz/must-gather-gm4xt"] Jan 27 10:43:46 crc kubenswrapper[4869]: I0127 10:43:46.757186 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-smctz_must-gather-gm4xt_293e3afd-0b77-490d-88bc-56f06235a889/copy/0.log" Jan 27 10:43:46 crc kubenswrapper[4869]: I0127 10:43:46.758431 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-smctz/must-gather-gm4xt" Jan 27 10:43:46 crc kubenswrapper[4869]: I0127 10:43:46.871235 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwlgr\" (UniqueName: \"kubernetes.io/projected/293e3afd-0b77-490d-88bc-56f06235a889-kube-api-access-cwlgr\") pod \"293e3afd-0b77-490d-88bc-56f06235a889\" (UID: \"293e3afd-0b77-490d-88bc-56f06235a889\") " Jan 27 10:43:46 crc kubenswrapper[4869]: I0127 10:43:46.871294 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/293e3afd-0b77-490d-88bc-56f06235a889-must-gather-output\") pod \"293e3afd-0b77-490d-88bc-56f06235a889\" (UID: \"293e3afd-0b77-490d-88bc-56f06235a889\") " Jan 27 10:43:46 crc kubenswrapper[4869]: I0127 10:43:46.877335 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/293e3afd-0b77-490d-88bc-56f06235a889-kube-api-access-cwlgr" (OuterVolumeSpecName: "kube-api-access-cwlgr") pod "293e3afd-0b77-490d-88bc-56f06235a889" (UID: "293e3afd-0b77-490d-88bc-56f06235a889"). InnerVolumeSpecName "kube-api-access-cwlgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:43:46 crc kubenswrapper[4869]: I0127 10:43:46.971144 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/293e3afd-0b77-490d-88bc-56f06235a889-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "293e3afd-0b77-490d-88bc-56f06235a889" (UID: "293e3afd-0b77-490d-88bc-56f06235a889"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 10:43:46 crc kubenswrapper[4869]: I0127 10:43:46.974138 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwlgr\" (UniqueName: \"kubernetes.io/projected/293e3afd-0b77-490d-88bc-56f06235a889-kube-api-access-cwlgr\") on node \"crc\" DevicePath \"\"" Jan 27 10:43:46 crc kubenswrapper[4869]: I0127 10:43:46.974184 4869 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/293e3afd-0b77-490d-88bc-56f06235a889-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 10:43:47 crc kubenswrapper[4869]: I0127 10:43:47.328175 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-smctz_must-gather-gm4xt_293e3afd-0b77-490d-88bc-56f06235a889/copy/0.log" Jan 27 10:43:47 crc kubenswrapper[4869]: I0127 10:43:47.328520 4869 generic.go:334] "Generic (PLEG): container finished" podID="293e3afd-0b77-490d-88bc-56f06235a889" containerID="b92be076f1038507985c02ab883776ca3fe5ff24cdc4929261e1448696df239a" exitCode=143 Jan 27 10:43:47 crc kubenswrapper[4869]: I0127 10:43:47.328551 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-smctz/must-gather-gm4xt" Jan 27 10:43:47 crc kubenswrapper[4869]: I0127 10:43:47.328593 4869 scope.go:117] "RemoveContainer" containerID="b92be076f1038507985c02ab883776ca3fe5ff24cdc4929261e1448696df239a" Jan 27 10:43:47 crc kubenswrapper[4869]: I0127 10:43:47.345107 4869 scope.go:117] "RemoveContainer" containerID="0141cd4f3e2cb3559b8bbc901e639f25c8fd1a07de3896f1c2d198ecdcfc5bc7" Jan 27 10:43:47 crc kubenswrapper[4869]: I0127 10:43:47.408374 4869 scope.go:117] "RemoveContainer" containerID="b92be076f1038507985c02ab883776ca3fe5ff24cdc4929261e1448696df239a" Jan 27 10:43:47 crc kubenswrapper[4869]: E0127 10:43:47.408923 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b92be076f1038507985c02ab883776ca3fe5ff24cdc4929261e1448696df239a\": container with ID starting with b92be076f1038507985c02ab883776ca3fe5ff24cdc4929261e1448696df239a not found: ID does not exist" containerID="b92be076f1038507985c02ab883776ca3fe5ff24cdc4929261e1448696df239a" Jan 27 10:43:47 crc kubenswrapper[4869]: I0127 10:43:47.409753 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b92be076f1038507985c02ab883776ca3fe5ff24cdc4929261e1448696df239a"} err="failed to get container status \"b92be076f1038507985c02ab883776ca3fe5ff24cdc4929261e1448696df239a\": rpc error: code = NotFound desc = could not find container \"b92be076f1038507985c02ab883776ca3fe5ff24cdc4929261e1448696df239a\": container with ID starting with b92be076f1038507985c02ab883776ca3fe5ff24cdc4929261e1448696df239a not found: ID does not exist" Jan 27 10:43:47 crc kubenswrapper[4869]: I0127 10:43:47.409788 4869 scope.go:117] "RemoveContainer" containerID="0141cd4f3e2cb3559b8bbc901e639f25c8fd1a07de3896f1c2d198ecdcfc5bc7" Jan 27 10:43:47 crc kubenswrapper[4869]: E0127 10:43:47.410269 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0141cd4f3e2cb3559b8bbc901e639f25c8fd1a07de3896f1c2d198ecdcfc5bc7\": container with ID starting with 0141cd4f3e2cb3559b8bbc901e639f25c8fd1a07de3896f1c2d198ecdcfc5bc7 not found: ID does not exist" containerID="0141cd4f3e2cb3559b8bbc901e639f25c8fd1a07de3896f1c2d198ecdcfc5bc7" Jan 27 10:43:47 crc 
kubenswrapper[4869]: I0127 10:43:47.410295 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0141cd4f3e2cb3559b8bbc901e639f25c8fd1a07de3896f1c2d198ecdcfc5bc7"} err="failed to get container status \"0141cd4f3e2cb3559b8bbc901e639f25c8fd1a07de3896f1c2d198ecdcfc5bc7\": rpc error: code = NotFound desc = could not find container \"0141cd4f3e2cb3559b8bbc901e639f25c8fd1a07de3896f1c2d198ecdcfc5bc7\": container with ID starting with 0141cd4f3e2cb3559b8bbc901e639f25c8fd1a07de3896f1c2d198ecdcfc5bc7 not found: ID does not exist" Jan 27 10:43:48 crc kubenswrapper[4869]: I0127 10:43:48.042053 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="293e3afd-0b77-490d-88bc-56f06235a889" path="/var/lib/kubelet/pods/293e3afd-0b77-490d-88bc-56f06235a889/volumes" Jan 27 10:43:52 crc kubenswrapper[4869]: I0127 10:43:52.047601 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:43:52 crc kubenswrapper[4869]: E0127 10:43:52.048175 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:43:55 crc kubenswrapper[4869]: I0127 10:43:55.033700 4869 scope.go:117] "RemoveContainer" containerID="6a5e719eee11fa7182938ba5394dd5451103ebb77494fcce1358cfad696163d9" Jan 27 10:43:55 crc kubenswrapper[4869]: E0127 10:43:55.034928 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:43:58 crc kubenswrapper[4869]: I0127 10:43:58.033402 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:43:58 crc kubenswrapper[4869]: E0127 10:43:58.034025 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:44:07 crc kubenswrapper[4869]: I0127 10:44:07.033173 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:44:07 crc kubenswrapper[4869]: E0127 10:44:07.034099 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:44:09 crc kubenswrapper[4869]: I0127 10:44:09.033285 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:44:09 crc kubenswrapper[4869]: I0127 10:44:09.033942 4869 scope.go:117] "RemoveContainer" 
containerID="6a5e719eee11fa7182938ba5394dd5451103ebb77494fcce1358cfad696163d9" Jan 27 10:44:09 crc kubenswrapper[4869]: E0127 10:44:09.034088 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:44:09 crc kubenswrapper[4869]: E0127 10:44:09.034159 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:44:21 crc kubenswrapper[4869]: I0127 10:44:21.033331 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:44:21 crc kubenswrapper[4869]: E0127 10:44:21.034064 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:44:22 crc kubenswrapper[4869]: I0127 10:44:22.037129 4869 scope.go:117] "RemoveContainer" containerID="6a5e719eee11fa7182938ba5394dd5451103ebb77494fcce1358cfad696163d9" Jan 27 10:44:22 crc kubenswrapper[4869]: E0127 10:44:22.037356 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:44:23 crc kubenswrapper[4869]: I0127 10:44:23.032602 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:44:23 crc kubenswrapper[4869]: E0127 10:44:23.032808 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:44:32 crc kubenswrapper[4869]: I0127 10:44:32.037107 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:44:32 crc kubenswrapper[4869]: E0127 10:44:32.037784 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:44:34 crc kubenswrapper[4869]: I0127 10:44:34.033096 4869 scope.go:117] "RemoveContainer" 
containerID="6a5e719eee11fa7182938ba5394dd5451103ebb77494fcce1358cfad696163d9" Jan 27 10:44:34 crc kubenswrapper[4869]: E0127 10:44:34.033681 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:44:35 crc kubenswrapper[4869]: I0127 10:44:35.034692 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:44:35 crc kubenswrapper[4869]: E0127 10:44:35.037728 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:44:43 crc kubenswrapper[4869]: I0127 10:44:43.033411 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:44:43 crc kubenswrapper[4869]: E0127 10:44:43.035081 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:44:48 crc kubenswrapper[4869]: I0127 10:44:48.035613 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:44:48 crc kubenswrapper[4869]: E0127 10:44:48.036304 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:44:49 crc kubenswrapper[4869]: I0127 10:44:49.034185 4869 scope.go:117] "RemoveContainer" containerID="6a5e719eee11fa7182938ba5394dd5451103ebb77494fcce1358cfad696163d9" Jan 27 10:44:49 crc kubenswrapper[4869]: E0127 10:44:49.034603 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:44:57 crc kubenswrapper[4869]: I0127 10:44:57.033746 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:44:57 crc kubenswrapper[4869]: E0127 10:44:57.034738 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" 
podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.033674 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:45:00 crc kubenswrapper[4869]: E0127 10:45:00.034351 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.166679 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491845-fxrl6"] Jan 27 10:45:00 crc kubenswrapper[4869]: E0127 10:45:00.167396 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="293e3afd-0b77-490d-88bc-56f06235a889" containerName="copy" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.167423 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="293e3afd-0b77-490d-88bc-56f06235a889" containerName="copy" Jan 27 10:45:00 crc kubenswrapper[4869]: E0127 10:45:00.167456 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7" containerName="extract-utilities" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.167469 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7" containerName="extract-utilities" Jan 27 10:45:00 crc kubenswrapper[4869]: E0127 10:45:00.167492 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7" containerName="extract-content" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.167505 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7" containerName="extract-content" Jan 27 10:45:00 crc kubenswrapper[4869]: E0127 10:45:00.167529 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="293e3afd-0b77-490d-88bc-56f06235a889" containerName="gather" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.167541 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="293e3afd-0b77-490d-88bc-56f06235a889" containerName="gather" Jan 27 10:45:00 crc kubenswrapper[4869]: E0127 10:45:00.167573 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7" containerName="registry-server" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.167584 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7" containerName="registry-server" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.167975 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c08a03d8-5435-4d4b-8c2c-dbe6fadeb4d7" containerName="registry-server" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.168011 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="293e3afd-0b77-490d-88bc-56f06235a889" containerName="copy" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.168029 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="293e3afd-0b77-490d-88bc-56f06235a889" containerName="gather" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.168801 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491845-fxrl6" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.170912 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.172724 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.172881 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491845-fxrl6"] Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.309217 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgsv7\" (UniqueName: \"kubernetes.io/projected/f1aee54f-93a1-4536-9a5d-24a78257def0-kube-api-access-cgsv7\") pod \"collect-profiles-29491845-fxrl6\" (UID: \"f1aee54f-93a1-4536-9a5d-24a78257def0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491845-fxrl6" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.309278 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1aee54f-93a1-4536-9a5d-24a78257def0-secret-volume\") pod \"collect-profiles-29491845-fxrl6\" (UID: \"f1aee54f-93a1-4536-9a5d-24a78257def0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491845-fxrl6" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.309352 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1aee54f-93a1-4536-9a5d-24a78257def0-config-volume\") pod \"collect-profiles-29491845-fxrl6\" (UID: \"f1aee54f-93a1-4536-9a5d-24a78257def0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491845-fxrl6" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.410884 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgsv7\" (UniqueName: \"kubernetes.io/projected/f1aee54f-93a1-4536-9a5d-24a78257def0-kube-api-access-cgsv7\") pod \"collect-profiles-29491845-fxrl6\" (UID: \"f1aee54f-93a1-4536-9a5d-24a78257def0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491845-fxrl6" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.410934 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1aee54f-93a1-4536-9a5d-24a78257def0-secret-volume\") pod \"collect-profiles-29491845-fxrl6\" (UID: \"f1aee54f-93a1-4536-9a5d-24a78257def0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491845-fxrl6" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.411005 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1aee54f-93a1-4536-9a5d-24a78257def0-config-volume\") pod \"collect-profiles-29491845-fxrl6\" (UID: \"f1aee54f-93a1-4536-9a5d-24a78257def0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491845-fxrl6" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.412047 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1aee54f-93a1-4536-9a5d-24a78257def0-config-volume\") pod 
\"collect-profiles-29491845-fxrl6\" (UID: \"f1aee54f-93a1-4536-9a5d-24a78257def0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491845-fxrl6" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.418959 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1aee54f-93a1-4536-9a5d-24a78257def0-secret-volume\") pod \"collect-profiles-29491845-fxrl6\" (UID: \"f1aee54f-93a1-4536-9a5d-24a78257def0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491845-fxrl6" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.428434 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgsv7\" (UniqueName: \"kubernetes.io/projected/f1aee54f-93a1-4536-9a5d-24a78257def0-kube-api-access-cgsv7\") pod \"collect-profiles-29491845-fxrl6\" (UID: \"f1aee54f-93a1-4536-9a5d-24a78257def0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491845-fxrl6" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.490916 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491845-fxrl6" Jan 27 10:45:00 crc kubenswrapper[4869]: I0127 10:45:00.933824 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491845-fxrl6"] Jan 27 10:45:00 crc kubenswrapper[4869]: W0127 10:45:00.937888 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1aee54f_93a1_4536_9a5d_24a78257def0.slice/crio-3fed138aed2ce6b39a209eab6ec86a13c7317afe2f60cc6775caf9a4ee33924f WatchSource:0}: Error finding container 3fed138aed2ce6b39a209eab6ec86a13c7317afe2f60cc6775caf9a4ee33924f: Status 404 returned error can't find the container with id 3fed138aed2ce6b39a209eab6ec86a13c7317afe2f60cc6775caf9a4ee33924f Jan 27 10:45:01 crc kubenswrapper[4869]: I0127 10:45:01.910777 4869 generic.go:334] "Generic (PLEG): container finished" podID="f1aee54f-93a1-4536-9a5d-24a78257def0" containerID="6637842d3f076398740e712299159e38ac3d02a3f7aabd70bb9eaacbe7dd2ba0" exitCode=0 Jan 27 10:45:01 crc kubenswrapper[4869]: I0127 10:45:01.910940 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491845-fxrl6" event={"ID":"f1aee54f-93a1-4536-9a5d-24a78257def0","Type":"ContainerDied","Data":"6637842d3f076398740e712299159e38ac3d02a3f7aabd70bb9eaacbe7dd2ba0"} Jan 27 10:45:01 crc kubenswrapper[4869]: I0127 10:45:01.911040 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491845-fxrl6" event={"ID":"f1aee54f-93a1-4536-9a5d-24a78257def0","Type":"ContainerStarted","Data":"3fed138aed2ce6b39a209eab6ec86a13c7317afe2f60cc6775caf9a4ee33924f"} Jan 27 10:45:03 crc kubenswrapper[4869]: I0127 10:45:03.221597 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491845-fxrl6" Jan 27 10:45:03 crc kubenswrapper[4869]: I0127 10:45:03.368512 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1aee54f-93a1-4536-9a5d-24a78257def0-secret-volume\") pod \"f1aee54f-93a1-4536-9a5d-24a78257def0\" (UID: \"f1aee54f-93a1-4536-9a5d-24a78257def0\") " Jan 27 10:45:03 crc kubenswrapper[4869]: I0127 10:45:03.368576 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1aee54f-93a1-4536-9a5d-24a78257def0-config-volume\") pod \"f1aee54f-93a1-4536-9a5d-24a78257def0\" (UID: \"f1aee54f-93a1-4536-9a5d-24a78257def0\") " Jan 27 10:45:03 crc kubenswrapper[4869]: I0127 10:45:03.368717 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgsv7\" (UniqueName: \"kubernetes.io/projected/f1aee54f-93a1-4536-9a5d-24a78257def0-kube-api-access-cgsv7\") pod \"f1aee54f-93a1-4536-9a5d-24a78257def0\" (UID: \"f1aee54f-93a1-4536-9a5d-24a78257def0\") " Jan 27 10:45:03 crc kubenswrapper[4869]: I0127 10:45:03.369230 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1aee54f-93a1-4536-9a5d-24a78257def0-config-volume" (OuterVolumeSpecName: "config-volume") pod "f1aee54f-93a1-4536-9a5d-24a78257def0" (UID: "f1aee54f-93a1-4536-9a5d-24a78257def0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 10:45:03 crc kubenswrapper[4869]: I0127 10:45:03.373364 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1aee54f-93a1-4536-9a5d-24a78257def0-kube-api-access-cgsv7" (OuterVolumeSpecName: "kube-api-access-cgsv7") pod "f1aee54f-93a1-4536-9a5d-24a78257def0" (UID: "f1aee54f-93a1-4536-9a5d-24a78257def0"). InnerVolumeSpecName "kube-api-access-cgsv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 10:45:03 crc kubenswrapper[4869]: I0127 10:45:03.374853 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1aee54f-93a1-4536-9a5d-24a78257def0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f1aee54f-93a1-4536-9a5d-24a78257def0" (UID: "f1aee54f-93a1-4536-9a5d-24a78257def0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 10:45:03 crc kubenswrapper[4869]: I0127 10:45:03.470137 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f1aee54f-93a1-4536-9a5d-24a78257def0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 10:45:03 crc kubenswrapper[4869]: I0127 10:45:03.470279 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1aee54f-93a1-4536-9a5d-24a78257def0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 10:45:03 crc kubenswrapper[4869]: I0127 10:45:03.470292 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgsv7\" (UniqueName: \"kubernetes.io/projected/f1aee54f-93a1-4536-9a5d-24a78257def0-kube-api-access-cgsv7\") on node \"crc\" DevicePath \"\"" Jan 27 10:45:03 crc kubenswrapper[4869]: I0127 10:45:03.926546 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491845-fxrl6" event={"ID":"f1aee54f-93a1-4536-9a5d-24a78257def0","Type":"ContainerDied","Data":"3fed138aed2ce6b39a209eab6ec86a13c7317afe2f60cc6775caf9a4ee33924f"} Jan 27 10:45:03 crc kubenswrapper[4869]: I0127 10:45:03.926591 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fed138aed2ce6b39a209eab6ec86a13c7317afe2f60cc6775caf9a4ee33924f" Jan 27 10:45:03 crc kubenswrapper[4869]: I0127 10:45:03.926605 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491845-fxrl6" Jan 27 10:45:04 crc kubenswrapper[4869]: I0127 10:45:04.032900 4869 scope.go:117] "RemoveContainer" containerID="6a5e719eee11fa7182938ba5394dd5451103ebb77494fcce1358cfad696163d9" Jan 27 10:45:04 crc kubenswrapper[4869]: E0127 10:45:04.033191 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:45:04 crc kubenswrapper[4869]: I0127 10:45:04.288107 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n"] Jan 27 10:45:04 crc kubenswrapper[4869]: I0127 10:45:04.294283 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491800-hfl7n"] Jan 27 10:45:06 crc kubenswrapper[4869]: I0127 10:45:06.054680 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e44d1050-60ab-468f-8716-2d74939a3820" path="/var/lib/kubelet/pods/e44d1050-60ab-468f-8716-2d74939a3820/volumes" Jan 27 10:45:09 crc kubenswrapper[4869]: I0127 10:45:09.034080 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:45:09 crc kubenswrapper[4869]: E0127 10:45:09.034979 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 
10:45:11 crc kubenswrapper[4869]: I0127 10:45:11.033601 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:45:11 crc kubenswrapper[4869]: E0127 10:45:11.034034 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:45:17 crc kubenswrapper[4869]: I0127 10:45:17.033280 4869 scope.go:117] "RemoveContainer" containerID="6a5e719eee11fa7182938ba5394dd5451103ebb77494fcce1358cfad696163d9" Jan 27 10:45:17 crc kubenswrapper[4869]: E0127 10:45:17.034411 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:45:20 crc kubenswrapper[4869]: I0127 10:45:20.032926 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:45:20 crc kubenswrapper[4869]: E0127 10:45:20.033420 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:45:26 crc kubenswrapper[4869]: I0127 10:45:26.033585 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:45:26 crc kubenswrapper[4869]: E0127 10:45:26.034651 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:45:32 crc kubenswrapper[4869]: I0127 10:45:32.042151 4869 scope.go:117] "RemoveContainer" containerID="6a5e719eee11fa7182938ba5394dd5451103ebb77494fcce1358cfad696163d9" Jan 27 10:45:32 crc kubenswrapper[4869]: E0127 10:45:32.043178 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:45:35 crc kubenswrapper[4869]: I0127 10:45:35.033161 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:45:35 crc kubenswrapper[4869]: E0127 10:45:35.033961 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq 
pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:45:41 crc kubenswrapper[4869]: I0127 10:45:41.033611 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:45:41 crc kubenswrapper[4869]: E0127 10:45:41.034439 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:45:43 crc kubenswrapper[4869]: I0127 10:45:43.033247 4869 scope.go:117] "RemoveContainer" containerID="6a5e719eee11fa7182938ba5394dd5451103ebb77494fcce1358cfad696163d9" Jan 27 10:45:43 crc kubenswrapper[4869]: E0127 10:45:43.034336 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:45:48 crc kubenswrapper[4869]: I0127 10:45:48.034198 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:45:48 crc kubenswrapper[4869]: E0127 10:45:48.036232 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:45:53 crc kubenswrapper[4869]: I0127 10:45:53.033270 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:45:53 crc kubenswrapper[4869]: E0127 10:45:53.035343 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:45:54 crc kubenswrapper[4869]: I0127 10:45:54.679469 4869 scope.go:117] "RemoveContainer" containerID="6577c34c76ce03b9b7c660fe77d9b9d79a5590af76fc29973182e32afb76235c" Jan 27 10:45:54 crc kubenswrapper[4869]: I0127 10:45:54.703393 4869 scope.go:117] "RemoveContainer" containerID="f59bf12eb64cbc6b710c3cd82e75305dcae07e3e0bb1a5f60c96284ae23f39a8" Jan 27 10:45:58 crc kubenswrapper[4869]: I0127 10:45:58.032844 4869 scope.go:117] "RemoveContainer" containerID="6a5e719eee11fa7182938ba5394dd5451103ebb77494fcce1358cfad696163d9" Jan 27 10:45:58 crc kubenswrapper[4869]: E0127 10:45:58.033568 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:46:03 crc kubenswrapper[4869]: I0127 10:46:03.032887 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:46:03 crc kubenswrapper[4869]: E0127 10:46:03.033862 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:46:06 crc kubenswrapper[4869]: I0127 10:46:06.033921 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:46:06 crc kubenswrapper[4869]: E0127 10:46:06.034373 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:46:11 crc kubenswrapper[4869]: I0127 10:46:11.033026 4869 scope.go:117] "RemoveContainer" containerID="6a5e719eee11fa7182938ba5394dd5451103ebb77494fcce1358cfad696163d9" Jan 27 10:46:11 crc kubenswrapper[4869]: E0127 10:46:11.034468 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:46:17 crc kubenswrapper[4869]: I0127 10:46:17.033328 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:46:17 crc kubenswrapper[4869]: E0127 10:46:17.034432 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:46:18 crc kubenswrapper[4869]: I0127 10:46:18.033743 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:46:18 crc kubenswrapper[4869]: E0127 10:46:18.034013 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:46:26 crc kubenswrapper[4869]: I0127 10:46:26.033262 4869 scope.go:117] "RemoveContainer" containerID="6a5e719eee11fa7182938ba5394dd5451103ebb77494fcce1358cfad696163d9" Jan 27 10:46:26 crc kubenswrapper[4869]: E0127 10:46:26.033812 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:46:31 crc kubenswrapper[4869]: I0127 10:46:31.034777 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:46:31 crc kubenswrapper[4869]: E0127 10:46:31.035717 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-server-0_openstack(61608a46-7d70-4a1b-ac50-6238d5bf7ad9)\"" pod="openstack/rabbitmq-server-0" podUID="61608a46-7d70-4a1b-ac50-6238d5bf7ad9" Jan 27 10:46:33 crc kubenswrapper[4869]: I0127 10:46:33.034464 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:46:33 crc kubenswrapper[4869]: I0127 10:46:33.722272 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerStarted","Data":"1cee91e8c0c79b375d0ca68ee08089b45833fc46574ed396a4dee316e7f4ca41"} Jan 27 10:46:33 crc kubenswrapper[4869]: I0127 10:46:33.722749 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 10:46:37 crc kubenswrapper[4869]: I0127 10:46:37.033257 4869 scope.go:117] "RemoveContainer" containerID="6a5e719eee11fa7182938ba5394dd5451103ebb77494fcce1358cfad696163d9" Jan 27 10:46:37 crc kubenswrapper[4869]: E0127 10:46:37.033673 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2qh9_openshift-machine-config-operator(12a3e458-3f5f-46cf-b242-9a3986250bcf)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2qh9" podUID="12a3e458-3f5f-46cf-b242-9a3986250bcf" Jan 27 10:46:37 crc kubenswrapper[4869]: I0127 10:46:37.754953 4869 generic.go:334] "Generic (PLEG): container finished" podID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" containerID="1cee91e8c0c79b375d0ca68ee08089b45833fc46574ed396a4dee316e7f4ca41" exitCode=0 Jan 27 10:46:37 crc kubenswrapper[4869]: I0127 10:46:37.755038 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80","Type":"ContainerDied","Data":"1cee91e8c0c79b375d0ca68ee08089b45833fc46574ed396a4dee316e7f4ca41"} Jan 27 10:46:37 crc kubenswrapper[4869]: I0127 10:46:37.755305 4869 scope.go:117] "RemoveContainer" containerID="738a285c2e7752989e46451681aa0f10f2dfff971e665daf2434cb2d790d851d" Jan 27 10:46:37 crc kubenswrapper[4869]: I0127 10:46:37.756234 4869 scope.go:117] "RemoveContainer" containerID="1cee91e8c0c79b375d0ca68ee08089b45833fc46574ed396a4dee316e7f4ca41" Jan 27 10:46:37 crc kubenswrapper[4869]: E0127 10:46:37.756735 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"rabbitmq\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=rabbitmq pod=rabbitmq-cell1-server-0_openstack(cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80)\"" pod="openstack/rabbitmq-cell1-server-0" podUID="cc9f9a53-b2d4-4a7f-a4ad-5fe5f6b99f80" Jan 27 10:46:42 crc kubenswrapper[4869]: I0127 
10:46:42.041686 4869 scope.go:117] "RemoveContainer" containerID="50909b33bf70622d2a9d7f0bd579bd710a7b5b18bd51a721e3fa5b10fd26d85f" Jan 27 10:46:42 crc kubenswrapper[4869]: I0127 10:46:42.809930 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"61608a46-7d70-4a1b-ac50-6238d5bf7ad9","Type":"ContainerStarted","Data":"c355b109be7f2d7bb2114c214a8f2c2bb24f4735da68d27eec69d3a7c3a3ace7"} Jan 27 10:46:42 crc kubenswrapper[4869]: I0127 10:46:42.810387 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"